Sample records for common error type

  1. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  2. [Refractive errors in patients with cerebral palsy].

    PubMed

    Mrugacz, Małgorzata; Bandzul, Krzysztof; Kułak, Wojciech; Poppe, Ewa; Jurowski, Piotr

    2013-04-01

    Ocular changes are common in patients with cerebral palsy (CP), occurring in about 50% of cases; the most common are refractive errors and strabismus. The aim of the paper was to estimate the relationship between refractive errors and neurological pathologies in patients with selected types of CP. Material and methods: Refractive errors were analyzed in patients within two groups of CP, diplegia spastica and tetraparesis, with nervous system pathologies taken into account. Results: The study demonstrated correlations between refractive errors and both the type of CP and its severity as classified on the GMFCS scale. Refractive errors were more common in patients with tetraparesis than with diplegia spastica. In the group with diplegia spastica, myopia and astigmatism were more common, whereas in tetraparesis hyperopia predominated.

  3. Medication Errors in Pediatric Anesthesia: A Report From the Wake Up Safe Quality Improvement Initiative.

    PubMed

    Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S

    2017-09-01

    Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, which comprised the third highest category of events behind cardiac- and respiratory-related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions, as opposed to one-time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient and more than half of these events caused patient harm. Fifteen events (5%) required a life sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable. Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors would be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.

  4. The Effectiveness of Chinese NNESTs in Teaching English Syntax

    ERIC Educational Resources Information Center

    Chou, Chun-Hui; Bartz, Kevin

    2007-01-01

    This paper evaluates the effect of Chinese non-native English-speaking teachers (NNESTs) on Chinese ESL students' struggles with English syntax. The paper first classifies Chinese learners' syntactic errors into 10 common types. It demonstrates how each type of error results from an internal attempt to translate a common Chinese construction into…

  5. Prevalence of amblyopia and patterns of refractive error in the amblyopic children of a tertiary eye care center of Nepal.

    PubMed

    Sapkota, K; Pirouzian, A; Matta, N S

    2013-01-01

    Refractive error is a common cause of amblyopia. To determine the prevalence of amblyopia and the pattern and types of refractive error in children with amblyopia in a tertiary eye hospital of Nepal, a retrospective chart review of children diagnosed with amblyopia in the Nepal Eye Hospital (NEH) from July 2006 to June 2011 was conducted. Children aged 13 years or older or who had any ocular pathology were excluded. Cycloplegic refraction and an ophthalmological examination were performed for all children. The pattern of refractive error and the association between types of refractive error and types of amblyopia were determined. Amblyopia was found in 0.7% (440) of the 62,633 children examined in NEH during this period. All the amblyopic eyes of the subjects had refractive error. Fifty-six percent (248) of the patients were male, and the mean age was 7.74 ± 2.97 years. Anisometropia was the most common cause of amblyopia (p < 0.001). One third (29%) of the subjects had bilateral amblyopia due to high ametropia. Forty percent of eyes had severe amblyopia, with visual acuity of 20/120 or worse. About two-thirds (59.2%) of the eyes had astigmatism. The prevalence of amblyopia in the Nepal Eye Hospital is 0.7%. Anisometropia is the most common cause of amblyopia, and astigmatism is the most common type of refractive error in amblyopic eyes. © NEPjOPH.

  6. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
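
    The mechanism described in this abstract is easy to reproduce in simulation. Below is a minimal, hypothetical Python sketch (not the authors' code; all names and parameter values are assumptions): an outcome depends only on the first of two correlated predictors, the first predictor is observed with measurement error, and the nominal 5% test on the second predictor then rejects far more often than 5%.

      # Hypothetical sketch of Type I error inflation caused by measurement
      # error in a predictor (illustrative only; not the article's design).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n, reps, alpha = 200, 2000, 0.05
      rejections = 0
      for _ in range(reps):
          x1 = rng.normal(size=n)
          x2 = 0.7 * x1 + np.sqrt(1 - 0.7**2) * rng.normal(size=n)  # corr. with x1
          y = x1 + rng.normal(size=n)                  # y depends on x1 only
          x1_obs = x1 + rng.normal(scale=1.0, size=n)  # x1 measured with error
          X = sm.add_constant(np.column_stack([x1_obs, x2]))
          rejections += sm.OLS(y, X).fit().pvalues[2] < alpha  # x2 is truly null
      print(f"empirical Type I error for x2: {rejections / reps:.3f}")  # >> 0.05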

  7. Types of diagnostic errors in neurological emergencies in the emergency department.

    PubMed

    Dubosh, Nicole M; Edlow, Jonathan A; Lefton, Micah; Pope, Jennifer V

    2015-02-01

    Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.

  8. The Effects of Non-Normality on Type III Error for Comparing Independent Means

    ERIC Educational Resources Information Center

    Mendes, Mehmet

    2007-01-01

    The major objective of this study was to investigate the effects of non-normality on Type III error rates for the ANOVA F test and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. These tests were therefore compared in terms of Type III error rates across a variety of population distributions,…

  9. Refractive errors in Aminu Kano Teaching Hospital, Kano Nigeria.

    PubMed

    Lawan, Abdu; Eme, Okpo

    2011-12-01

    The aim of the study was to retrospectively determine the pattern of refractive errors seen in the eye clinic of Aminu Kano Teaching Hospital, Kano, Nigeria from January to December 2008. The clinic refraction register was used to retrieve the case folders of all patients refracted during the review period. Information extracted included the patient's age, sex, and type of refractive error. All patients had a basic eye examination (to rule out other causes of subnormal vision), including intraocular pressure measurement and streak retinoscopy at a two-thirds-meter working distance. The final subjective refraction correction given to the patients was used to categorise the type of refractive error. Refractive error was observed in 1584 patients and accounted for 26.9% of clinic attendance. There were more females than males (M:F = 1:1.2). The common types of refractive errors were presbyopia in 644 patients (40%), various types of astigmatism in 527 patients (33%), myopia in 216 patients (14%), hypermetropia in 171 patients (11%) and aphakia in 26 patients (2%). Refractive errors are a common cause of presentation in the eye clinic. Identification and correction of refractive errors should be an integral part of eye care delivery.

  10. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    PubMed

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of the 310 pediatric chemotherapy error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  11. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kry, S; Dromgoole, L; Alvarez, P

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7%, although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly in areas highlighted herein that show a tendency for errors.

  12. Evaluation of a UMLS Auditing Process of Semantic Type Assignments

    PubMed Central

    Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua

    2007-01-01

    The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845

  13. A comparison of different statistical methods analyzing hypoglycemia data using bootstrap simulations.

    PubMed

    Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory

    2015-01-01

    Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients, so it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models were also evaluated. Simulation results showed that the Poisson model inflated the type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both the Poisson and negative binomial models yielded only slightly inflated type I error rates, close to the nominal level, with reasonable power. The ANCOVA model provided reasonable control of the type I error, and the rank ANCOVA model was associated with the greatest power while also controlling the type I error reasonably well. Inflated type I error was observed with the ZIP and ZINB models.
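
    A type I error assessment of the kind described above can be approximated along the following lines. This is a hedged Python sketch, not the paper's code: both "treatment arms" are bootstrapped from the same pool of overdispersed counts so the null hypothesis holds by construction, a Poisson GLM is fit, and the rejection frequency estimates the empirical type I error.

      # Hedged sketch of a bootstrap type I error check for a Poisson model
      # on overdispersed count data (illustrative; data are simulated, not
      # the trial data analyzed in the paper).
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      pooled = rng.negative_binomial(2, 0.3, size=300)  # stand-in for pooled counts
      reps, alpha, n_arm = 1000, 0.05, 100
      rejections = 0
      for _ in range(reps):
          y = rng.choice(pooled, size=2 * n_arm, replace=True)  # H0 true
          arm = np.repeat([0.0, 1.0], n_arm)
          X = sm.add_constant(arm)
          fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
          rejections += fit.pvalues[1] < alpha
      print(f"empirical type I error (Poisson GLM): {rejections / reps:.3f}")
      # Overdispersion typically pushes this above the nominal 0.05.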

  14. An observational study of drug administration errors in a Malaysian hospital (study of drug administration errors).

    PubMed

    Chua, S S; Tea, M H; Rahman, M H A

    2009-04-01

    Drug administration errors are the second most frequent type of medication error, after prescribing errors; however, prescribing errors are often intercepted, so administration errors are more likely to reach the patient. Therefore, this study was conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This was a prospective study that involved direct, undisguised observation of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate was reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented so that a risk management protocol can be developed and implemented.
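
    The headline rate and interval above follow from a standard binomial calculation; a quick Python check using the normal approximation (the paper's exact interval method is not stated, so small rounding differences are expected):

      # Reproducing the reported error rate and 95% CI: 127 administrations
      # with errors out of 1118 observed opportunities.
      import math

      errors, n = 127, 1118
      p = errors / n
      se = math.sqrt(p * (1 - p) / n)
      lo, hi = p - 1.96 * se, p + 1.96 * se
      print(f"rate = {100 * p:.1f}%, 95% CI {100 * lo:.1f}-{100 * hi:.1f}")
      # -> rate = 11.4%, 95% CI 9.5-13.2 (the paper reports 9.5-13.3)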

  15. Frequency and Type of Situational Awareness Errors Contributing to Death and Brain Damage: A Closed Claims Analysis.

    PubMed

    Schulz, Christian M; Burden, Amanda; Posner, Karen L; Mincer, Shawn L; Steadman, Randolph; Wagner, Klaus J; Domino, Karen B

    2017-08-01

    Situational awareness errors may play an important role in the genesis of patient harm. The authors examined closed anesthesia malpractice claims for death or brain damage to determine the frequency and type of situational awareness errors. Surgical and procedural anesthesia death and brain damage claims in the Anesthesia Closed Claims Project database were analyzed. Situational awareness error was defined as failure to perceive relevant clinical information, failure to comprehend the meaning of available information, or failure to project, anticipate, or plan. Patient and case characteristics, primary damaging events, and anesthesia payments in claims with situational awareness errors were compared to other death and brain damage claims from 2002 to 2013. Anesthesiologist situational awareness errors contributed to death or brain damage in 198 of 266 claims (74%). Respiratory system damaging events were more common in claims with situational awareness errors (56%) than other claims (21%, P < 0.001). The most common specific respiratory events in error claims were inadequate oxygenation or ventilation (24%), difficult intubation (11%), and aspiration (10%). Payments were made in 85% of situational awareness error claims compared to 46% in other claims (P = 0.001), with no significant difference in payment size. Among 198 claims with anesthesia situational awareness error, perception errors were most common (42%), whereas comprehension errors (29%) and projection errors (29%) were relatively less common. Situational awareness error definitions were operationalized for reliable application to real-world anesthesia cases. Situational awareness errors may have contributed to catastrophic outcomes in three quarters of recent anesthesia malpractice claims. Situational awareness errors resulting in death or brain damage remain prevalent causes of malpractice claims in the 21st century.

  16. Error identification and recovery by student nurses using human patient simulation: opportunity to improve patient safety.

    PubMed

    Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L

    2010-02-01

    This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.

  17. Frequency and types of the medication errors in an academic emergency department in Iran: The emergent need for clinical pharmacy services in emergency departments.

    PubMed

    Zeraatchi, Alireza; Talebian, Mohammad-Taghi; Nejati, Amir; Dashti-Khavidaki, Simin

    2013-07-01

    Emergency departments (EDs) are characterized by the simultaneous care of multiple patients with various medical conditions. Because of the large number of patients with complex diseases, the speed and complexity of medication use, and work in an understaffed and crowded environment, medication errors are commonly perpetrated by emergency care providers. This study was designed to evaluate the incidence of medication errors among patients attending the ED of a teaching hospital in Iran. In this cross-sectional study, a total of 500 patients attending the ED were randomly assessed for the incidence and types of medication errors. Factors related to medication errors, such as working shift, weekday and the schedule of the trainees' educational program, were also evaluated. Nearly 22% of patients experienced at least one medication error. The rate of medication errors was 0.41 errors per patient and 0.16 errors per ordered medication. The frequency of medication errors was higher in men, in middle-aged patients, on the first days of the week, during night-time work schedules and in the first semester of the educational year of new junior emergency medicine residents. More than 60% of errors were prescription errors by physicians, and the remainder were transcription or administration errors by nurses. More than 35% of the prescribing errors happened during the selection of drug dose and frequency. The most common medication errors by nurses during administration were omission errors (16.2%), followed by unauthorized drug (6.4%). Most of the medication errors happened with anticoagulants and thrombolytics (41.2%), followed by antimicrobial agents (37.7%) and insulin (7.4%). In this study, at least one-fifth of the patients attending the ED experienced medication errors, resulting from multiple factors. The more common prescription errors happened during the ordering of drug dose and frequency; the more common administration errors included drug omission or unauthorized drug administration.

  18. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
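
    The inflation mechanism and the recommended alternative are both easy to demonstrate. The following hedged Python sketch (parameters assumed for illustration; the article's simulations are far more extensive) removes "outliers" at |Z| > 2 separately per group before a t test on skewed null data, and compares the rejection rate with a Mann-Whitney test on the untrimmed data:

      # Hedged sketch: Z-based outlier removal before a t test vs. a
      # Mann-Whitney test without removal, under a true null with skewed data.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      def trim_z(x, z=2.0):
          zs = (x - x.mean()) / x.std(ddof=1)
          return x[np.abs(zs) <= z]

      reps, alpha, n = 5000, 0.05, 40
      rej_t = rej_mww = 0
      for _ in range(reps):
          a = rng.poisson(3, n).astype(float)  # skewed sum-score-like data
          b = rng.poisson(3, n).astype(float)
          rej_t += stats.ttest_ind(trim_z(a), trim_z(b)).pvalue < alpha
          rej_mww += stats.mannwhitneyu(a, b).pvalue < alpha
      print(f"t test after Z-based removal: {rej_t / reps:.3f}")   # above 0.05
      print(f"Mann-Whitney, no removal:     {rej_mww / reps:.3f}")  # near 0.05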

  19. Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method

    NASA Astrophysics Data System (ADS)

    Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu

    2017-10-01

    Owing to its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research. However, the error caused by the lever structure has a great influence on profile measurement, so this paper analyzes the errors of a high-precision, large-range lever-type stylus profilometer. The errors are corrected by the Nelder-Mead Simplex method, and the results are verified by spherical surface calibration. The results show that this method can effectively reduce the measurement error and improve the accuracy of stylus profilometry in large-range measurement.
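
    The abstract does not give the error model, so the following Python sketch is purely illustrative: it assumes a quadratic lever-arm error term and fits its coefficients with scipy's Nelder-Mead implementation against a known reference-sphere profile, mirroring the calibrate-then-correct workflow described.

      # Hypothetical sketch: fit an assumed quadratic lever error model to a
      # reference-sphere measurement using the Nelder-Mead simplex method.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)
      x = np.linspace(-5.0, 5.0, 200)            # lateral position (mm)
      R = 25.0                                   # reference sphere radius (mm)
      z_true = R - np.sqrt(R**2 - x**2)          # ideal spherical profile
      z_meas = (z_true + 0.002 * z_true**2 + 0.001 * z_true
                + rng.normal(0.0, 1e-4, x.size))  # simulated lever distortion

      def cost(params):
          a, b = params                          # assumed error coefficients
          z_corr = z_meas - (a * z_meas**2 + b * z_meas)
          return np.sum((z_corr - z_true) ** 2)

      res = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
      print("fitted error coefficients:", res.x)  # close to [0.002, 0.001]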

  20. Heuristic errors in clinical reasoning.

    PubMed

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed among third-year medical students and first-year residents, and there was no difference in the types of errors observed between the two groups. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  1. Analysis of error type and frequency in apraxia of speech among Portuguese speakers.

    PubMed

    Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo

    2010-01-01

    Most studies characterizing errors in the speech of patients with apraxia involve the English language. The aim of this study was to analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. Twenty adults with apraxia of speech caused by stroke were assessed. The types of error committed by the patients were analyzed both quantitatively and qualitatively, and their frequencies compared. We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis errors, in descending order of frequency. Omission errors were among the most common, whereas addition errors were infrequent. These findings differ from those reported for English-speaking patients, probably owing to differences in the methodologies used for classifying error types, the inclusion of speakers with apraxia secondary to aphasia, and differences between the structures of Portuguese and English in terms of syllable-onset complexity and its effect on motor control. The frequency of omission and addition errors observed differed from the frequencies reported for speakers of English.

  2. Headaches associated with refractive errors: myth or reality?

    PubMed

    Gil-Gouveia, R; Martins, I P

    2002-04-01

    Headache and refractive errors are very common conditions in the general population, and those with headache often attribute their pain to a visual problem. The International Headache Society (IHS) criteria for the classification of headache include an entity of headache associated with refractive errors (HARE), but indicate that its importance is widely overestimated. To compare overall headache frequency and HARE frequency in healthy subjects with uncorrected or miscorrected refractive errors and a control group, we interviewed 105 individuals with uncorrected refractive errors and a control group of 71 subjects (with properly corrected or without refractive errors) regarding their headache history. We compared the occurrence of headache and its diagnosis in both groups and assessed its relation to their habits of visual effort and type of refractive errors. Headache frequency was similar in both subjects and controls. Headache associated with refractive errors was the only headache type significantly more common in subjects with refractive errors than in controls (6.7% versus 0%). It was associated with hyperopia and was unrelated to visual effort or to the severity of visual error. With adequate correction, 72.5% of the subjects with headache and refractive error reported improvement in their headaches, and 38% had complete remission of headache. Regardless of the type of headache present, headache frequency was significantly reduced in these subjects (t = 2.34, P = .02). Headache associated with refractive errors was rarely identified in individuals with refractive errors. In those with chronic headache, proper correction of refractive errors significantly improved headache complaints, primarily by decreasing the frequency of headache episodes.

  3. Neyman-Pearson classification algorithms and NP receiver operating characteristics

    PubMed Central

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-01-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies. PMID:29423442

  4. Neyman-Pearson classification algorithms and NP receiver operating characteristics.

    PubMed

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-02-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies.
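
    The authors' reference implementation is the R package nproc; as a language-neutral illustration, here is a hedged Python sketch of the umbrella algorithm's core step: choose the classification threshold as an order statistic of held-out class-0 scores so that the probability that the realized type I error exceeds α is at most a tolerance δ.

      # Sketch of the NP umbrella threshold rule (after Tong, Feng & Li 2018).
      # Works with scores from any base classifier (logistic regression, SVM,
      # random forest, ...); the held-out scores below are simulated.
      import numpy as np
      from scipy.stats import binom

      def np_threshold(scores_class0, alpha=0.05, delta=0.05):
          """Smallest order statistic whose violation probability is <= delta."""
          s = np.sort(np.asarray(scores_class0))
          n = s.size
          for k in range(1, n + 1):
              # P(type I error > alpha) when thresholding at the k-th smallest
              # held-out class-0 score:
              violation = 1.0 - binom.cdf(k - 1, n, 1.0 - alpha)
              if violation <= delta:
                  return s[k - 1]
          raise ValueError("held-out class-0 sample too small for this bound")

      rng = np.random.default_rng(4)
      held_out = rng.normal(size=500)            # hypothetical class-0 scores
      t = np_threshold(held_out)
      print(f"predict class 1 when score > {t:.3f}")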

  5. Syntactic and semantic errors in radiology reports associated with speech recognition software.

    PubMed

    Ringler, Michael D; Goss, Brian C; Bartholmai, Brian J

    2017-03-01

    Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialty, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) were material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). Proportion of errors and fraction of material errors varied significantly among radiologists and between imaging subspecialties (p < .001). Errors were more common in cross-sectional reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). Error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.

  6. Association between workarounds and medication administration errors in bar-code-assisted medication administration in hospitals.

    PubMed

    van der Veen, Willem; van den Bemt, Patricia M L A; Wouters, Hans; Bates, David W; Twisk, Jos W R; de Gier, Johan J; Taxis, Katja; Duyvendak, Michiel; Luttikhuis, Karen Oude; Ros, Johannes J W; Vasbinder, Erwin C; Atrafi, Maryam; Brasse, Bjorn; Mangelaars, Iris

    2018-04-01

    To study the association of workarounds with medication administration errors when using barcode-assisted medication administration (BCMA), and to determine the frequency and types of workarounds and medication administration errors. A prospective observational study in Dutch hospitals using BCMA to administer medication. Direct observation was used to collect data. The primary outcome measure was the proportion of medication administrations with one or more medication administration errors; the secondary outcomes were the frequency and types of workarounds and medication administration errors. Univariate and multivariate multilevel logistic regression analyses were used to assess the association between workarounds and medication administration errors. Descriptive statistics were used for the secondary outcomes. We included 5793 medication administrations for 1230 inpatients. Workarounds were associated with medication administration errors (adjusted odds ratio 3.06 [95% CI: 2.49-3.78]). Most commonly, procedural workarounds were observed, such as not scanning at all (36%), not scanning patients because they did not wear a wristband (28%), incorrect medication scanning, multiple medication scanning, and ignoring alert signals (11%). Common types of medication administration errors were omissions (78%), administration of non-ordered drugs (8.0%), and wrong doses given (6.0%). Workarounds are associated with medication administration errors in hospitals using BCMA. These data suggest that BCMA needs more post-implementation evaluation if it is to achieve the intended benefits for medication safety. In hospitals using barcode-assisted medication administration, workarounds occurred in 66% of medication administrations and were associated with large numbers of medication administration errors.

  7. Medication Errors in Patients with Enteral Feeding Tubes in the Intensive Care Unit.

    PubMed

    Sohrevardi, Seyed Mojtaba; Jarahzadeh, Mohammad Hossein; Mirzaei, Ehsan; Mirjalili, Mahtabalsadat; Tafti, Arefeh Dehghani; Heydari, Behrooz

    2017-01-01

    Most patients admitted to Intensive Care Units (ICUs) have problems using oral medication or ingesting solid forms of drugs, and selecting the most suitable dosage form for such patients is a challenge. The current study was conducted to assess the frequency and types of errors in oral medication administration in patients with enteral feeding tubes or swallowing problems. A cross-sectional study was performed in the ICU of Shahid Sadoughi Hospital, Yazd, Iran. Patients were assessed for the incidence and types of medication errors occurring in the process of preparation and administration of oral medicines. Ninety-four patients were involved in this study and 10,250 administrations were observed. In total, 4753 errors occurred among the studied patients. The most commonly used drugs were pantoprazole tablets, piracetam syrup, and losartan tablets. A total of 128 different types of drugs and nine different oral pharmaceutical preparations were prescribed for the patients. Forty-one (35.34%) of the 116 different solid drugs (excluding effervescent tablets and powders) could have been substituted by liquid or injectable forms. The most common error was administration at the wrong time. Errors of wrong dose preparation and wrong administration accounted for 24.04% and 25.31% of all errors, respectively. In this study, at least three-fourths of the patients experienced medication errors. The occurrence of these errors can greatly impair the quality of the patients' pharmacotherapy, and more attention should be paid to this issue.

  8. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when the sample size and allocation rate to the treatment arms are modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
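
    The worst-case search described above builds on the conditional type I error of the pre-planned test. As a hedged reminder of the standard formula (notation assumed here, not quoted from the paper): writing t = n_1/n for the interim information fraction and z_1 for the interim test statistic, the conventional one-sided fixed-sample z-test rejects at level α with conditional probability

      % Conditional type 1 error of the fixed-sample one-sided z-test under
      % H0, given interim statistic z_1 at information fraction t = n_1/n:
      \[
        A(z_1) \;=\; 1 - \Phi\!\left( \frac{z_{1-\alpha} - \sqrt{t}\, z_1}{\sqrt{1 - t}} \right)
      \]

    A design modification chosen after seeing z_1 inflates the overall type 1 error exactly when it raises the average of A(z_1) over the interim distribution above α, which is the quantity the authors' worst-case strategies maximize.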

  9. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    PubMed

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors were made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (κ) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41%), "Wrong patient" (13%) and "Omission of drug" (12%). In 95% of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68%), "Proper protocol not followed" (25%), "Lack of knowledge" (13%) and "Practice beyond scope" (12%). In 78% of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36%), "Unclear communication or orders" (30%) and "Lack of adequate access to guidelines or unclear organisational routines" (30%). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common among less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to making errors in spite of "Lack of adequate access to guidelines or unclear organisational routines". Medication errors regarded as malpractice in Sweden were of the same character as medication errors worldwide. A complex interplay between individual and system factors often contributed to the errors.
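
    For readers unfamiliar with the two techniques named above, here is a brief hypothetical Python sketch (counts and codings invented for illustration; not the study's data) of Fisher's exact test on a 2x2 experience-by-error table and Cohen's kappa between two binary per-case codings:

      # Hedged sketch of the two association measures named above, applied to
      # invented data (not the 585 malpractice cases).
      import numpy as np
      from scipy.stats import fisher_exact
      from sklearn.metrics import cohen_kappa_score

      # Less experienced vs. experienced nurses committing a given error type:
      table = np.array([[30, 70],   # less experienced: error / no error
                        [12, 88]])  # experienced:      error / no error
      odds_ratio, p = fisher_exact(table)
      print(f"Fisher's exact: OR = {odds_ratio:.2f}, p = {p:.4f}")

      # Kappa between a per-case error indicator and a contributory factor:
      error_flag  = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 1])
      factor_flag = np.array([1, 1, 0, 1, 1, 0, 0, 0, 0, 1])
      print(f"Cohen's kappa = {cohen_kappa_score(error_flag, factor_flag):.2f}")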

  10. Refractive errors in Mercyland Specialist Hospital, Osogbo, Western Nigeria.

    PubMed

    Adeoti, C O; Egbewale, B E

    2008-06-01

    The study was conducted to determine the magnitude and pattern of refractive errors in order to provide facilities for their management. A prospective study of 3601 eyes of 1824 consecutive patients was conducted. Information obtained included age, sex, occupation, visual acuity, and type and degree of refractive error. The data were analysed using the Statistical Package for the Social Sciences (version 11.0). Refractive error was found in 1824 (53.71%) patients. There were 832 (45.61%) males and 992 (54.39%) females, with a mean age of 35.55 years. Myopia was the commonest error (1412 eyes, 39.21%). Others included hypermetropia (840 eyes, 23.33%) and astigmatism (785 eyes, 21.80%), and 820 patients (1640 eyes) had presbyopia. Anisometropia was present in 791 (44.51%) of the 1777 patients who had bilateral refractive errors. Two thousand two hundred and fifty-two eyes had spherical errors. Of these, 1308 eyes (58.08%) had errors of -0.50 to +0.50 dioptres; 567 eyes (25.18%) had errors less than -0.50 dioptres, of which 63 eyes (2.80%) had errors less than -5.00 dioptres; and 377 eyes (16.74%) had errors greater than +0.50 dioptres, of which 81 eyes (3.60%) had errors greater than +2.00 dioptres. The highest error was 20.00 dioptres for myopia and 18.00 dioptres for hypermetropia. Refractive error is common in this environment. Adequate provision should be made for its correction, bearing in mind the common types and degrees.

  11. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent minded" errors involving failures to execute action plans as intended.

  12. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

    Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as results from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results compared to conventional linear models, particularly in respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as that generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
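
    A minimal sketch of the comparison the authors propose, for a single hypothetical gene-SNP pair with heavy-tailed expression noise; the Huber-type robust fit here stands in for the robust linear model, and all data are simulated:

      # Hedged sketch: OLS vs. a Huber-type robust linear model for one
      # cis-eQTL test (expression ~ allele dosage) with heavy-tailed errors.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 300
      dosage = rng.integers(0, 3, size=n).astype(float)   # genotype 0/1/2
      expr = 0.3 * dosage + rng.standard_t(df=3, size=n)  # t(3) noise, outliers

      X = sm.add_constant(dosage)
      ols = sm.OLS(expr, X).fit()
      rlm = sm.RLM(expr, X, M=sm.robust.norms.HuberT()).fit()
      print(f"OLS:   beta = {ols.params[1]:.3f}, p = {ols.pvalues[1]:.2e}")
      print(f"Huber: beta = {rlm.params[1]:.3f}, p = {rlm.pvalues[1]:.2e}")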

  13. Addressing Common Student Technical Errors in Field Data Collection: An Analysis of a Citizen-Science Monitoring Project.

    PubMed

    Philippoff, Joanna; Baumgartner, Erin

    2016-03-01

    The scientific value of citizen-science programs is limited when the data gathered are inconsistent, erroneous, or otherwise unusable. Long-term monitoring studies, such as Our Project In Hawai'i's Intertidal (OPIHI), have clear and consistent procedures and are thus a good model for evaluating the quality of participant data. The purpose of this study was to examine the kinds of errors made by student researchers during OPIHI data collection and factors that increase or decrease the likelihood of these errors. Twenty-four different types of errors were grouped into four broad error categories: missing data, sloppiness, methodological errors, and misidentification errors. "Sloppiness" was the most prevalent error type. Error rates decreased with field trip experience and student age. We suggest strategies to reduce data collection errors applicable to many types of citizen-science projects including emphasizing neat data collection, explicitly addressing and discussing the problems of falsifying data, emphasizing the importance of using standard scientific vocabulary, and giving participants multiple opportunities to practice to build their data collection techniques and skills.

  14. First order error corrections in common introductory physics experiments

    NASA Astrophysics Data System (ADS)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As a part of introductory physics courses, students perform various standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is given to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better and more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.

  15. Role of Grammatical Gender and Semantics in German Word Production

    ERIC Educational Resources Information Center

    Vigliocco, Gabriella; Vinson, David P.; Indefrey, Peter; Levelt, Willem J. M.; Hellwig, Frauke

    2004-01-01

    Semantic substitution errors (e.g., saying "arm" when "leg" is intended) are among the most common types of errors occurring during spontaneous speech. It has been shown that grammatical gender of German target nouns is preserved in the errors (E. Mane, 1999). In 3 experiments, the authors explored different accounts of the grammatical gender…

  16. How Helpful Are Error Management and Counterfactual Thinking Instructions to Inexperienced Spreadsheet Users' Training Task Performance?

    ERIC Educational Resources Information Center

    Caputi, Peter; Chan, Amy; Jayasuriya, Rohan

    2011-01-01

    This paper examined the impact of training strategies on the types of errors that novice users make when learning a commonly used spreadsheet application. Fifty participants were assigned to a counterfactual thinking training (CFT) strategy, an error management training strategy, or a combination of both strategies, and completed an easy task…

  17. Refractive errors in presbyopic patients in Kano, Nigeria.

    PubMed

    Lawan, Abdu; Okpo, Eme; Philips, Ebisike

    2014-01-01

    The study is a retrospective review of the pattern of refractive errors in presbyopic patients seen in the eye clinic from January to December 2009. The clinic refraction register was used to retrieve the case folders of all patients refracted during the review period. Information extracted included the patient's age, sex, and type of refractive error. Unaided and pinhole visual acuity was measured with Snellen's or "E" charts, and near vision with Jaeger's chart in English or Hausa. All patients had a basic eye examination and streak retinoscopy at a two-thirds-meter working distance. The final subjective refractive correction given to the patients was used to categorize the type of refractive error. There were 5893 patients; 1584 had refractive error and 644 were presbyopic. There were 289 males and 355 females (M:F = 1:1.2). Presbyopia accounted for 10.9% of clinic attendance and 40% of patients with refractive error. Presbyopia alone was seen in 17%, while the remaining 83% also required distance correction: astigmatism was seen in 41%, hypermetropia in 29%, myopia in 9% and aphakia in 4%. Refractive error was more common in females than males, and the relationship was statistically significant (P = 0.017; P < 0.05 considered significant). Presbyopia is common, and most of the patients had other refractive errors. Full refraction is advised for all patients.

  18. Errors in fluid therapy in medical wards.

    PubMed

    Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin

    2012-04-01

    Intravenous fluid therapy remains an essential part of patient care during hospitalization. Only a few studies have focused on fluid therapy in hospitalized patients, and there is no consensus statement about fluid therapy in patients hospitalized in medical wards. The aim of the present study was to assess the status of intravenous fluid therapy and related errors in patients during hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. In a retrospective study, data related to intravenous fluid therapy were collected by two infectious diseases clinical pharmacists from 2008 to 2010. Intravenous fluid therapy information, including indication, type, volume and rate of fluid administration, was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on a literature review and available recommendations. The data related to the patients' fluid therapy were compared with this protocol. Fluid therapy was considered appropriate if it was compatible with the protocol regarding the indication for intravenous fluid therapy and the type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume or rate of administration was considered an intravenous fluid therapy error. Five hundred and ninety-six medication errors were detected in the patients during the study period, an overall rate of 1.3 fluid therapy errors per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors. Male sex, old age, baseline renal disease, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection or sepsis were predisposing factors for the occurrence of fluid therapy errors. Our results showed that intravenous fluid therapy errors occurred commonly in hospitalized patients, especially in the medical wards. Improving health-care workers' knowledge of and attention to these errors is essential for the prevention of medication errors in fluid therapy.

  19. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.

  20. Analysis of Statistical Methods and Errors in the Articles Published in the Korean Journal of Pain

    PubMed Central

    Yim, Kyoung Hoon; Han, Kyoung Ah; Park, Soo Young

    2010-01-01

    Background Statistical analysis is essential for obtaining objective reliability in medical research. However, many medical researchers do not have enough statistical knowledge to properly analyze their study data. To help understand and potentially alleviate this problem, we have analyzed the statistical methods and errors of articles published in the Korean Journal of Pain (KJP), with the intention of improving the statistical quality of the journal. Methods All articles, except case reports and editorials, published from 2004 to 2008 in the KJP were reviewed. The types of statistical methods applied and the errors in the articles were evaluated. Results One hundred and thirty-nine original articles were reviewed. Inferential statistics were used in 119 papers and descriptive statistics in 20 papers. Only 20.9% of the papers were free from statistical errors. The most commonly adopted statistical method was the t-test (21.0%), followed by the chi-square test (15.9%). Errors of omission were encountered 101 times in 70 papers. Among the errors of omission, "no statistics used even though statistical methods were required" was the most common (40.6%). Errors of commission were encountered 165 times in 86 papers, among which "parametric inference for nonparametric data" was the most common (33.9%). Conclusions We found various types of statistical errors in the articles published in the KJP. This suggests that meticulous attention should be given not only to applying statistical procedures but also to the review process, to improve the quality of the articles. PMID:20552071
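
    The most frequent error of commission above, "parametric inference for nonparametric data", can be guarded against mechanically. The Python sketch below is illustrative only (the data, threshold, and fallback test are assumptions, not taken from the article): it checks a normality assumption before choosing between Student's t-test and a rank-based alternative.

    ```python
    # Illustrative guard against "parametric inference for nonparametric data":
    # run a normality check first, then pick the appropriate two-sample test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.exponential(scale=2.0, size=30)   # skewed, non-normal data
    group_b = rng.exponential(scale=2.5, size=30)

    def compare_groups(a, b, alpha=0.05):
        """Use Student's t-test only if both samples pass a Shapiro-Wilk
        normality check; otherwise fall back to the Mann-Whitney U test."""
        _, p_a = stats.shapiro(a)
        _, p_b = stats.shapiro(b)
        if p_a > alpha and p_b > alpha:
            _, p = stats.ttest_ind(a, b)
            return "Student's t-test", p
        _, p = stats.mannwhitneyu(a, b)
        return "Mann-Whitney U", p

    name, p = compare_groups(group_a, group_b)
    print(f"{name}: p = {p:.4f}")
    ```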

  1. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude, short-duration earthquakes, can be eliminated by using filtered shot-noise models (i.e., white noise modulated by the envelope first and then filtered).
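
    The distinction the authors draw is purely one of operation order: filter-then-modulate versus modulate-then-filter. The numpy sketch below shows both constructions from the same white-noise realization; the envelope shape, filter, and sampling parameters are arbitrary stand-ins, not the paper's actual model.

    ```python
    # Contrast between the two stochastic ground-motion constructions.
    import numpy as np
    from scipy.signal import butter, lfilter

    rng = np.random.default_rng(1)
    fs, duration = 100.0, 20.0                    # Hz, seconds (arbitrary)
    t = np.arange(0.0, duration, 1.0 / fs)
    white = rng.standard_normal(t.size)

    envelope = (t / 2.0) * np.exp(1.0 - t / 2.0)  # simple deterministic envelope
    b, a = butter(4, 10.0, btype="low", fs=fs)    # stand-in ground filter

    # (a) Uniformly modulated filtered white noise: filter, then modulate.
    #     Per the abstract, this form overestimates long-period response.
    modulated_filtered = envelope * lfilter(b, a, white)

    # (b) Filtered shot noise: modulate the white noise first, then filter.
    filtered_shot_noise = lfilter(b, a, envelope * white)
    ```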

  2. Type I and Type II error concerns in fMRI research: re-balancing the scale

    PubMed Central

    Cunningham, William A.

    2009-01-01

    Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
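
    The joint threshold the authors recommend is simple to apply to a statistical map. The sketch below is a toy illustration (a random volume stands in for real fMRI data, and connectivity details are simplified): voxels passing P < 0.005 are grouped into connected clusters, and only clusters of at least 10 voxels survive.

    ```python
    # Toy joint intensity + cluster-extent threshold on a fake z-map.
    import numpy as np
    from scipy import ndimage, stats

    rng = np.random.default_rng(2)
    zmap = rng.standard_normal((40, 48, 40))   # stand-in z-statistic volume

    z_cut = stats.norm.isf(0.005)              # voxelwise one-sided p < 0.005
    above = zmap > z_cut
    labels, n_clusters = ndimage.label(above)  # 3-D connected components
    sizes = ndimage.sum(above, labels, index=range(1, n_clusters + 1))

    surviving = int(np.sum(np.asarray(sizes) >= 10))
    print(f"{n_clusters} suprathreshold clusters; "
          f"{surviving} meet the 10-voxel extent threshold")
    ```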

  3. Review of medication errors that are new or likely to occur more frequently with electronic medication management systems.

    PubMed

    Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan

    2018-05-14

    Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea to all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but some new medication errors have been identified and some are likely to occur more frequently with eMMS. Few published Australian studies have reported on medication error types likely to occur more frequently with eMMS across more than one organisation, covering both administration and prescribing errors. What does this paper add? This paper proposes a new, simple classification system for eMMS medication errors and outlines the most commonly reported incident types, which can inform organisations and vendors on possible eMMS improvements. What are the implications for practitioners? The results of the present study highlight to organisations the need for ongoing review of system design, refinement of workflow issues, staff education and training, and reporting and monitoring of errors.

  4. How common are cognitive errors in cases presented at emergency medicine resident morbidity and mortality conferences?

    PubMed

    Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett

    2018-06-20

    Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed by two EM physicians using the electronic medical record (EMR) and notes from the M&M case. Each case was categorized by the type of primary medical error that occurred, as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified as faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in the cases presented, and that these errors are less often due to deficient knowledge and more often due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.

  5. Effects of hemisphere speech dominance and seizure focus on patterns of behavioral response errors for three types of stimuli.

    PubMed

    Rausch, R; MacDonald, K

    1997-03-01

    We used a protocol consisting of a continuous presentation of stimuli with associated response requests during an intracarotid sodium amobarbital procedure (IAP) to study the effects of hemisphere injected (speech dominant vs. nondominant) and seizure focus (left temporal lobe vs. right temporal lobe) on the pattern of behavioral response errors for three types of visual stimuli (pictures of common objects, words, and abstract forms). Injection of the left speech dominant hemisphere compared to the right nondominant hemisphere increased overall errors and affected the pattern of behavioral errors. The presence of a seizure focus in the contralateral hemisphere increased overall errors, particularly for the right temporal lobe seizure patients, but did not affect the pattern of behavioral errors. Left hemisphere injections disrupted both naming and reading responses at a rate similar to that of matching-to-sample performance. Also, a short-term memory deficit was observed with all three stimuli. Long-term memory testing following the left hemisphere injection indicated that only for pictures of common objects were there fewer errors during the early postinjection period than for the later long-term memory testing. Therefore, despite the inability to respond to picture stimuli, picture items, but not words or forms, could be sufficiently encoded for later recall. In contrast, right hemisphere injections resulted in few errors, with a pattern suggesting a mild general cognitive decrease. A selective weakness in learning unfamiliar forms was found. Our findings indicate that different patterns of behavioral deficits occur following the left vs. right hemisphere injections, with selective patterns specific to stimulus type.

  6. Endodontic Procedural Errors: Frequency, Type of Error, and the Most Frequently Treated Tooth.

    PubMed

    Yousuf, Waqas; Khan, Moiz; Mehdi, Hasan

    2015-01-01

    Introduction. The aim of this study is to determine the most common endodontically treated tooth and the most common error produced during treatment, and to note the association of particular errors with particular teeth. Material and Methods. Periapical radiographs were taken of all the included teeth and were stored and assessed using DIGORA Optime. Teeth in each group were evaluated for the presence or absence of procedural errors (i.e., overfill, underfill, ledge formation, perforations, apical transportation, and/or instrument separation), and the most frequently treated tooth was also noted. Results. A total of 1748 root canal treated teeth were assessed, out of which 574 (32.8%) contained a procedural error. Of these, 397 (22.7%) were overfilled, 155 (8.9%) were underfilled, 16 (0.9%) had instrument separation, and 7 (0.4%) had apical transportation. The most frequently treated tooth was the right permanent mandibular first molar (11.3%). The least commonly treated teeth were the permanent mandibular third molars (0.1%). Conclusion. Practitioners should take greater care to maintain accuracy of the working length throughout the procedure, as errors in length accounted for the vast majority of errors, and special care should be taken when working on molars.

  7. Residents' numeric inputting error in computerized physician order entry prescription.

    PubMed

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting method used in human-computer interaction (HCI), produce different error rates and types, but these have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, and to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. Among the numbers 3, 8, and 9, the less common digits used in prescriptions, the error rate was higher, posing a substantial risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad is recommended because it produced lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
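
    The four error types counted in this study (omission, substitution, transposition, intrusion) can be distinguished automatically by comparing the intended and typed digit strings. The sketch below is a hypothetical classifier, not the authors' coding scheme; the rules are one reasonable formalization.

    ```python
    # Hypothetical classifier for the four numeric-entry error types.
    def is_subsequence(short: str, long: str) -> bool:
        it = iter(long)
        return all(ch in it for ch in short)  # 'in' consumes the iterator

    def classify_error(intended: str, typed: str) -> str:
        if typed == intended:
            return "no error"
        if len(typed) == len(intended) - 1 and is_subsequence(typed, intended):
            return "omission"          # a digit was dropped, e.g. 380 -> 30
        if len(typed) == len(intended) + 1 and is_subsequence(intended, typed):
            return "intrusion"         # an extra digit crept in, e.g. 38 -> 398
        if len(typed) == len(intended):
            diffs = [i for i, (a, b) in enumerate(zip(intended, typed)) if a != b]
            if len(diffs) == 1:
                return "substitution"  # one digit replaced, e.g. 38 -> 39
            if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                    and intended[diffs[0]] == typed[diffs[1]]
                    and intended[diffs[1]] == typed[diffs[0]]):
                return "transposition"  # adjacent digits swapped, e.g. 38 -> 83
        return "other"

    for intended, typed in [("380", "30"), ("38", "398"), ("38", "39"), ("38", "83")]:
        print(f"{intended} -> {typed}: {classify_error(intended, typed)}")
    ```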

  8. Issues with data and analyses: Errors, underlying themes, and potential solutions

    PubMed Central

    Allison, David B.

    2018-01-01

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079

  9. Misrepresentations of evolutionary psychology in sex and gender textbooks.

    PubMed

    Winegard, Benjamin M; Winegard, Bo M; Deaner, Robert O

    2014-05-20

    Evolutionary psychology has provoked controversy, especially when applied to human sex differences. We hypothesize that this is partly due to misunderstandings of evolutionary psychology that are perpetuated by undergraduate sex and gender textbooks. As an initial test of this hypothesis, we develop a catalog of eight types of errors and document their occurrence in 15 widely used sex and gender textbooks. Consistent with our hypothesis, of the 12 textbooks that discussed evolutionary psychology, all contained at least one error, and the median number of errors was five. The most common types of errors were "Straw Man," "Biological Determinism," and "Species Selection." We conclude by suggesting improvements to undergraduate sex and gender textbooks.

  10. Errors in imaging of traumatic injuries.

    PubMed

    Scaglione, Mariano; Iaselli, Francesco; Sica, Giacomo; Feragalli, Beatrice; Nicola, Refky

    2015-10-01

    The advent of multi-detector computed tomography (MDCT) has drastically improved the outcomes of patients with multiple traumatic injuries. However, there are still diagnostic challenges to be considered. A missed or delayed diagnosis in trauma patients can sometimes be related to perception or other non-visual cues, while other errors are due to poor technique or poor image quality. In order to avoid serious complications, it is important for the practicing radiologist to be cognizant of the most common types of errors. The objective of this article is to review the various types of errors in the evaluation of patients with multiple trauma injuries or polytrauma with MDCT.

  11. Does raising type 1 error rate improve power to detect interactions in linear regression models? A simulation study.

    PubMed

    Durand, Casey P

    2013-01-01

    Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
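
    A stripped-down version of the study's Monte Carlo design is easy to reproduce. The sketch below estimates power for a continuous-by-dichotomous interaction at two Type 1 error rates; the sample size, effect size, and replicate count are arbitrary illustrations, not the paper's 240 scenarios.

    ```python
    # Minimal Monte Carlo sketch: power for an interaction term at two alphas.
    import numpy as np
    from scipy import stats

    def interaction_pvalue(n, beta_int, rng):
        """One simulated dataset; returns the p-value of the interaction term."""
        x = rng.standard_normal(n)                 # continuous predictor
        g = rng.integers(0, 2, n)                  # dichotomous moderator
        y = 0.3 * x + 0.3 * g + beta_int * x * g + rng.standard_normal(n)
        X = np.column_stack([np.ones(n), x, g, x * g])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        df = n - X.shape[1]
        se = np.sqrt((resid @ resid / df) * np.linalg.inv(X.T @ X)[3, 3])
        return 2 * stats.t.sf(abs(coef[3] / se), df)

    rng = np.random.default_rng(3)
    pvals = np.array([interaction_pvalue(200, 0.2, rng) for _ in range(2000)])
    for alpha in (0.05, 0.10):
        print(f"alpha = {alpha}: estimated power ~ {(pvals < alpha).mean():.3f}")
    ```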

  12. A nationwide descriptive study of obstetric claims for compensation in Norway.

    PubMed

    Andreasen, Stine; Backe, Bjørn; Jørstad, Rolf Gunnar; Oian, Pål

    2012-10-01

    To describe causes of substandard care in obstetric compensation claims. A nationwide descriptive study in Norway. All obstetric patients who believed they had been injured by the health service and applied for compensation. Data were collected from 871 claims to The Norwegian System of Compensation to Patients during 1994-2008, of which 278 were awarded compensation. Type of injury and cause of substandard care. Of 871 cases, 278 (31.9%) resulted in compensation. Of those, asphyxia was the most common type of injury to the child (83.4%). Anal sphincter tear (29.9%) and infection (23.0%) were the most common types of injury to the mother. Human error, both by midwives (37.1% of all cases given compensation) and obstetricians (51.2%), was an important contributing factor in inadequate obstetric care. Neglecting signs of fetal distress (28.1%), more competent health workers not being called when appropriate (26.3%) and inadequate fetal monitoring (17.3%) were often observed. System errors such as time conflicts, neglect of written guidelines and poor organization of the department were infrequent causes of injury (8.3%). Fetal asphyxia is the most common reason for compensation, resulting in large financial expenses to society. Human error contributes to inadequate health care in 92% of obstetric compensation claims, although underlying system errors may also be present. © 2012 The Authors. Acta Obstetricia et Gynecologica Scandinavica © 2012 Nordic Federation of Societies of Obstetrics and Gynecology.

  13. At the cross-roads: an on-road examination of driving errors at intersections.

    PubMed

    Young, Kristie L; Salmon, Paul M; Lenné, Michael G

    2013-09-01

    A significant proportion of road trauma occurs at intersections. Understanding the nature of driving errors at intersections therefore has the potential to lead to significant injury reductions. To further understand how the complexity of modern intersections shapes driver behaviour, errors made at intersections are compared to errors made mid-block, and the role of wider systems failures in intersection error causation is investigated in an on-road study. Twenty-five participants drove a pre-determined urban route incorporating 25 intersections. Two in-vehicle observers recorded the errors made while a range of other data was collected, including driver verbal protocols, video, driver eye glance behaviour and vehicle data (e.g., speed, braking and lane position). Participants also completed a post-trial cognitive task analysis interview. Participants were found to make 39 specific error types, with speeding violations the most common. Participants made significantly more errors at intersections than mid-block, with misjudgement, action and perceptual/observation errors more commonly observed at intersections. Traffic signal configuration was found to play a key role in intersection error causation, with drivers making more errors at partially signalised than at fully signalised intersections. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Accounting for Relatedness in Family Based Genetic Association Studies

    PubMed Central

    McArdle, P.F.; O’Connell, J.R.; Pollin, T.I.; Baumgarten, M.; Shuldiner, A.R.; Peyser, P.A.; Mitchell, B.D.

    2007-01-01

    Objective Assess the differences in point estimates, power and type 1 error rates when accounting for and ignoring family structure in genetic tests of association. Methods We compare by simulation the performance of analytic models using variance components to account for family structure and regression models that ignore relatedness, for a range of possible family based study designs (i.e., sib pairs vs. large sibships vs. nuclear families vs. extended families). Results Our analyses indicate that effect size estimates and power are not significantly affected by ignoring family structure. Type 1 error rates increase when family structure is ignored, as the density of family structures increases, and as trait heritability increases. For discrete traits with moderate levels of heritability and across many common sampling designs, type 1 error rates rise from a nominal 0.05 to 0.11. Conclusion Ignoring family structure may be useful in screening, although it comes at the cost of an increased type 1 error rate, the magnitude of which depends on trait heritability and pedigree configuration. PMID:17570925
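
    The mechanism behind the inflation is that both the trait and the genotype cluster within families, so a naive analysis overstates the effective sample size. The sketch below is a crude illustration under invented parameters (sibships of four, a shared family effect, and a family-clustered genotype with no true effect); it is not the authors' variance-components simulation.

    ```python
    # Type 1 error inflation when relatedness is ignored: the tested
    # genotype has no true effect, yet a naive independence test rejects
    # more often than the nominal 5%.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n_fam, sibs, reps = 200, 4, 2000
    rejections = 0
    for _ in range(reps):
        fam_effect = np.repeat(rng.standard_normal(n_fam), sibs)  # heritable trait
        y = fam_effect + rng.standard_normal(n_fam * sibs)
        # Crude stand-in for siblings sharing parental genotypes:
        fam_geno = np.repeat(rng.binomial(2, 0.3, n_fam).astype(float), sibs)
        g = fam_geno + rng.normal(0.0, 0.5, n_fam * sibs)
        _, p = stats.pearsonr(g, y)   # naive test assuming independence
        rejections += p < 0.05
    print(f"empirical type 1 error ~ {rejections / reps:.3f} (nominal 0.05)")
    ```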

  15. PREVALENCE OF REFRACTIVE ERRORS IN MADRASSA STUDENTS OF HARIPUR DISTRICT.

    PubMed

    Atta, Zoia; Arif, Abdus Salam; Ahmed, Iftikhar; Farooq, Umer

    2015-01-01

    Visual impairment due to refractive errors is one of the most common problems among school-age children and is the second leading cause of treatable blindness. The Right to Sight, a global initiative launched by a coalition of non-government organizations and the World Health Organization (WHO), aims to eliminate avoidable visual impairment and blindness at a global level. In order to achieve this goal it is important to know the prevalence of different refractive errors in a community. Children and teenagers are the groups most susceptible to refractive errors, so this population needs to be screened for the different types of refractive error. The objective of this study was to find the frequency of different types of refractive errors in madrassa students aged 5-20 years in Haripur. This cross-sectional study was done with 300 students aged 5-20 years in madrassas of Haripur. The students were screened for refractive errors, the types of error were noted, and glasses were then prescribed. Myopia (52.6%) was the most frequent refractive error in students, followed by hyperopia (28.4%) and astigmatism (19%). This study showed that myopia is an important problem in the madrassa population. Females and males are almost equally affected. Spectacle correction of refractive errors is the cheapest and easiest solution to this problem.

  16. Policies on documentation and disciplinary action in hospital pharmacies after a medication error.

    PubMed

    Bauman, A N; Pedersen, C A; Schommer, J C; Griffith, N L

    2001-06-15

    Hospital pharmacies were surveyed about policies on medication error documentation and actions taken against pharmacists involved in an error. The survey was mailed to 500 randomly selected hospital pharmacy directors in the United States. Data were collected on the existence of medication error reporting policies, what types of errors were documented and how, and hospital demographics. The response rate was 28%. Virtually all of the hospitals had policies and procedures for medication error reporting. Most commonly, documentation of oral and written reprimand was placed in the personnel file of a pharmacist involved in an error. One sixth of respondents had no policy on documentation or disciplinary action in the event of an error. Approximately one fourth of respondents reported that suspension or termination had been used as a form of disciplinary action; legal action was rarely used. Many respondents said errors that caused harm (42%) or death (40%) to the patient were documented in the personnel file, but 34% of hospitals did not document errors in the personnel file regardless of error type. Nearly three fourths of respondents differentiated between errors caught and not caught before a medication leaves the pharmacy and between errors caught and not caught before administration to the patient. More emphasis is needed on documentation of medication errors in hospital pharmacies.

  17. How Prediction Errors Shape Perception, Attention, and Motivation

    PubMed Central

    den Ouden, Hanneke E. M.; Kok, Peter; de Lange, Floris P.

    2012-01-01

    Prediction errors (PE) are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees and consider the commonalities and differences of reported PE signals in light of recent suggestions that the computation of PE forms a fundamental mode of brain function. We discuss where different types of PE are encoded, how they are generated, and the different functional roles they fulfill. We suggest that while encoding of PE is a common computation across brain regions, the content and function of these error signals can be very different and are determined by the afferent and efferent connections within the neural circuitry in which they arise. PMID:23248610
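
    The canonical computational definition of a prediction error, from reinforcement learning, is the difference between the outcome received and the outcome expected, with the expectation then updated in proportion to the error. Below is a minimal Rescorla-Wagner-style sketch; the learning rate and reward probability are arbitrary illustrations.

    ```python
    # Reward prediction error: PE = outcome - expectation, which in turn
    # updates the expectation (delta rule).
    import numpy as np

    rng = np.random.default_rng(5)
    alpha = 0.1                              # learning rate
    value = 0.0                              # current reward expectation
    for trial in range(200):
        reward = rng.binomial(1, 0.8)        # stimulus pays off on 80% of trials
        prediction_error = reward - value    # positive = better than expected
        value += alpha * prediction_error    # expectation moves toward outcomes
    print(f"learned value ~ {value:.2f} (true expected reward = 0.8)")
    ```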

  18. Comparison of students from private and public schools on the spelling performance.

    PubMed

    Silva, Nathane Sanches Marques; Crenitte, Patrícia Abreu Pinheiro

    2015-01-01

    To compare the spelling ability of schoolchildren in the fourth to sixth grades of private and public elementary schools in Bauru, São Paulo, to verify whether errors are overcome as schooling progresses, and to establish the hierarchy of errors by frequency of occurrence. A dictation was applied to 384 schoolchildren: 206 from private schools (74 in the fourth grade, 65 in the fifth grade, and 67 in the sixth grade) and 178 from public schools (56 in the fourth grade, 63 in the fifth grade, and 59 in the sixth grade). Student's t test was used. In the comparison of total spelling error scores, differences were found between the fourth and sixth grades of the private and public schools. Spelling errors decreased as education progressed, and those related to language irregularities were the most common. Spelling ability and performance of students from the private and public schools differ in the fourth and sixth grades but are similar in the fifth grade. Spelling errors are gradually overcome as education progresses; this improvement was most marked between the fourth and fifth grades in the public schools. The decrease in types of spelling errors follows a hierarchy of categories: phoneme/grapheme conversion, simple contextual rules, complex contextual rules, and language irregularities. Finally, the most common type of spelling error was that related to language irregularities.

  19. Prevalence of refraction errors and color blindness in heavy vehicle drivers.

    PubMed

    Erdoğan, Haydar; Ozdemir, Levent; Arslan, Seher; Cetin, Ilhan; Ozeç, Ayşe Vural; Cetinkaya, Selma; Sümer, Haldun

    2011-01-01

    To investigate the frequency of eye disorders in heavy vehicle drivers. A cross-sectional study was conducted between November 2004 and September 2006 with 200 drivers and 200 non-drivers. A complete ophthalmologic examination was performed, including visual acuity and dilated examination of the posterior segment. We used an auto refractometer to determine refractive errors. According to the eye examination results, the prevalence of refractive error was 21.5% and 31.3% in the study and control groups respectively (P<0.05). The most common type of refractive error in the study group was myopic astigmatism (8.3%), while in the control group it was simple myopia (12.8%). The prevalence of dyschromatopsia in the drivers, the control group and the total group was 2.2%, 2.8% and 2.6% respectively. A considerable number of drivers lack optimal visual acuity. Refractive errors in drivers may impair traffic safety.

  1. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. To evaluate the frequency and nature of non-clinical transcription errors using VR dictation software, we performed a retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72%) reports contained ≥1 error, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant' and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' was the most common sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence, compared with 0.030 for plain film. Longer reports had a higher error rate, with reports of more than 25 sentences containing an average of 1.23 errors per report, compared with 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, some had the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.

  2. Statistically Controlling for Confounding Constructs Is Harder than You Think

    PubMed Central

    Westfall, Jacob; Yarkoni, Tal

    2016-01-01

    Social scientists often seek to demonstrate that a construct has incremental validity over and above other related constructs. However, these claims are typically supported by measurement-level models that fail to consider the effects of measurement (un)reliability. We use intuitive examples, Monte Carlo simulations, and a novel analytical framework to demonstrate that common strategies for establishing incremental construct validity using multiple regression analysis exhibit extremely high Type I error rates under parameter regimes common in many psychological domains. Counterintuitively, we find that error rates are highest—in some cases approaching 100%—when sample sizes are large and reliability is moderate. Our findings suggest that a potentially large proportion of incremental validity claims made in the literature are spurious. We present a web application (http://jakewestfall.org/ivy/) that readers can use to explore the statistical properties of these and other incremental validity arguments. We conclude by reviewing SEM-based statistical approaches that appropriately control the Type I error rate when attempting to establish incremental validity. PMID:27031707
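
    The core artifact is easy to reproduce: if the outcome depends only on one latent construct and both predictors are unreliable measures of it, the second predictor will spuriously appear to add incremental validity, and the problem worsens with sample size. The sketch below uses invented parameters and plain least squares; it mirrors the logic of the authors' simulations rather than their actual code.

    ```python
    # Spurious "incremental validity" from measurement unreliability: y depends
    # only on one latent construct, yet a second noisy proxy of that construct
    # tests significant over and above the first.
    import numpy as np
    from scipy import stats

    def spurious_increment_rate(n, reliability=0.7, reps=1000, seed=6):
        rng = np.random.default_rng(seed)
        err_sd = np.sqrt(1.0 / reliability - 1.0)  # measurement error scale
        hits = 0
        for _ in range(reps):
            latent = rng.standard_normal(n)
            y = latent + rng.standard_normal(n)            # true model: latent only
            x1 = latent + err_sd * rng.standard_normal(n)  # unreliable proxy 1
            x2 = latent + err_sd * rng.standard_normal(n)  # unreliable proxy 2
            X = np.column_stack([np.ones(n), x1, x2])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ coef
            df = n - 3
            se = np.sqrt((resid @ resid / df) * np.linalg.inv(X.T @ X)[2, 2])
            p = 2 * stats.t.sf(abs(coef[2] / se), df)      # test x2 given x1
            hits += p < 0.05
        return hits / reps

    for n in (100, 1000):  # the false-positive rate grows with sample size
        print(f"n={n}: spurious incremental-validity rate ~ "
              f"{spurious_increment_rate(n):.2f}")
    ```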

  3. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  4. Prevalence of medication errors in primary health care at Bahrain Defence Force Hospital – prescription-based study

    PubMed Central

    Aljasmi, Fatema; Almalood, Fatema

    2018-01-01

    Background One of the important activities that physicians – particularly general practitioners – perform is prescribing. It occurs in most health care facilities and especially in primary health care (PHC) settings. Objectives This study aims to determine what types of prescribing errors are made in PHC at Bahrain Defence Force (BDF) Hospital, and how common they are. Methods This was a retrospective study of data from PHC at BDF Hospital. The data consisted of 379 prescriptions randomly selected from the pharmacy between March and May 2013, and errors in the prescriptions were classified into five types: major omission, minor omission, commission, integration, and skill-related errors. Results Of the total prescriptions, 54.4% (N=206) were given to male patients and 45.6% (N=173) to female patients; 24.8% were given to patients under the age of 10 years. On average, there were 2.6 drugs per prescription. In the prescriptions, 8.7% of drugs were prescribed by their generic names, and 28% (N=106) of prescriptions included an antibiotic. Out of the 379 prescriptions, 228 had an error, and 44.3% (N=439) of the 992 prescribed drugs contained errors. The proportions of errors were as follows: 9.9% (N=38) were minor omission errors; 73.6% (N=323) were major omission errors; 9.3% (N=41) were commission errors; and 17.1% (N=75) were skill-related errors. Conclusion This study provides awareness of the presence of prescription errors and frequency of the different types of errors that exist in this hospital. Understanding the different types of errors could help future studies explore the causes of specific errors and develop interventions to reduce them. Further research should be conducted to understand the causes of these errors and demonstrate whether the introduction of electronic prescriptions has an effect on patient outcomes. PMID:29445304

  5. Quantum error-correcting codes from algebraic geometry codes of Castle type

    NASA Astrophysics Data System (ADS)

    Munuera, Carlos; Tenório, Wanderson; Torres, Fernando

    2016-10-01

    We study algebraic geometry codes producing quantum error-correcting codes by the CSS construction. We pay particular attention to the family of Castle codes. We show that many of the examples known in the literature in fact belong to this family of codes. We systematize these constructions by showing the common theory that underlies all of them.

  6. [Errors in prescriptions and their preparation at the outpatient pharmacy of a regional hospital].

    PubMed

    Alvarado A, Carolina; Ossa G, Ximena; Bustos M, Luis

    2017-01-01

    Adverse effects of medications are an important cause of morbidity and hospital admissions. Errors in the prescription or preparation of medications by pharmacy personnel are a factor that may influence the occurrence of these adverse effects. Aim: To assess the frequency and type of errors in prescriptions and in their preparation at the pharmacy unit of a regional public hospital. Prescriptions received by ambulatory patients and those being discharged from the hospital were reviewed using a 12-item checklist. The preparation of such prescriptions at the pharmacy unit was also reviewed, using a seven-item checklist. Seventy-two percent of prescriptions had at least one error. The most common mistake was the impossibility of determining the concentration of the prescribed drug. Prescriptions for patients being discharged from the hospital had the highest number of errors. When a prescription had more than two drugs, the risk of error increased 2.4 times. Twenty-four percent of prescription preparations had at least one error. The most common mistake was the labeling of drugs with incomplete medical indications. When a preparation included more than three drugs, the risk of preparation error increased 1.8 times. Prescriptions and the preparation of medications delivered to patients frequently contained errors. The most important risk factor for errors was the number of drugs prescribed.

  7. The distribution of refractive errors among children attending Lumbini Eye Institute, Nepal.

    PubMed

    Rai, S; Thapa, H B; Sharma, M K; Dhakhwa, K; Karki, R

    2012-01-01

    Uncorrected refractive error is an important cause of childhood blindness and visual impairment. To describe the patterns of refractive errors among children attending the outpatient clinic at the Department of Pediatric Ophthalmology, Lumbini Eye Institute, Bhairahawa, Nepal, records of 133 children with refractive errors aged 5-15 years, from both urban and rural areas of Nepal and the adjacent territory of India, attending the hospital between September and November 2010 were examined for patterns of refractive errors. The SPSS statistical software was used to perform data analysis. The commonest type of refractive error among the children was astigmatism (47%), followed by myopia (34%) and hyperopia (15%). Refractive error was more prevalent among children of both genders aged 11-15 years than among their younger counterparts (RR = 1.22, 95% CI = 0.66-2.25). Refractive error was more common in rural (70%) than urban children (26%). Rural females had a higher prevalence of myopia (38%) than urban females (18%). Among the children with refractive errors, only 57% were using spectacles at the initial presentation. Astigmatism is the commonest type of refractive error among children aged 5-15 years, followed by myopia and hypermetropia. Refractive error remains uncorrected in a significant number of children. © NEPjOPH.

  8. Effects of skilled nursing facility structure and process factors on medication errors during nursing home admission.

    PubMed

    Lane, Sandi J; Troyer, Jennifer L; Dienemann, Jacqueline A; Laditka, Sarah B; Blanchette, Christopher M

    2014-01-01

    Older adults are at greatest risk of medication errors during the transition period of the first 7 days after admission and readmission to a skilled nursing facility (SNF). The aim of this study was to evaluate structure- and process-related factors that contribute to medication errors and harm during transition periods at a SNF. Data on medication errors and potential medication errors during the 7-day transition period for residents entering North Carolina SNFs came from the Medication Error Quality Initiative-Individual Error database from October 2006 to September 2007. The impact of SNF structure and process measures on the number of reported medication errors and harm from errors was examined using bivariate and multivariate model methods. A total of 138 SNFs reported 581 transition-period medication errors; 73 (12.6%) caused harm. Chain affiliation was associated with a reduction in the volume of errors during the transition period. One third of all reported transition errors occurred during the medication administration phase of the medication use process, where dose omissions were the most common type of error; however, dose omissions caused harm less often than wrong-dose errors did. Prescribing errors were much less common than administration errors but were much more likely to cause harm. Both structure and process measures of quality were related to the volume of medication errors. However, process quality measures may play a more important role in predicting harm from errors during the transition of a resident into an SNF. Medication errors during transition could be reduced by improving both prescribing processes and the transcription and documentation of orders.

  9. The Importance of Statistical Modeling in Data Analysis and Inference

    ERIC Educational Resources Information Center

    Rollins, Derrick, Sr.

    2017-01-01

    Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…

  10. The global burden of diagnostic errors in primary care

    PubMed Central

    Singh, Hardeep; Schiff, Gordon D; Graber, Mark L; Onakpoya, Igho; Thompson, Matthew J

    2017-01-01

    Diagnosis is one of the most important tasks performed by primary care physicians. The World Health Organization (WHO) recently prioritized patient safety areas in primary care, and included diagnostic errors as a high-priority problem. In addition, a recent report from the Institute of Medicine in the USA, ‘Improving Diagnosis in Health Care’, concluded that most people will likely experience a diagnostic error in their lifetime. In this narrative review, we discuss the global significance, burden and contributory factors related to diagnostic errors in primary care. We synthesize available literature to discuss the types of presenting symptoms and conditions most commonly affected. We then summarize interventions based on available data and suggest next steps to reduce the global burden of diagnostic errors. Research suggests that we are unlikely to find a ‘magic bullet’ and confirms the need for a multifaceted approach to understand and address the many systems and cognitive issues involved in diagnostic error. Because errors involve many common conditions and are prevalent across all countries, the WHO’s leadership at a global level will be instrumental to address the problem. Based on our review, we recommend that the WHO consider bringing together primary care leaders, practicing frontline clinicians, safety experts, policymakers, the health IT community, medical education and accreditation organizations, researchers from multiple disciplines, patient advocates, and funding bodies among others, to address the many common challenges and opportunities to reduce diagnostic error. This could lead to prioritization of practice changes needed to improve primary care as well as setting research priorities for intervention development to reduce diagnostic error. PMID:27530239

  11. A Comparison of Medication Histories Obtained by a Pharmacy Technician Versus Nurses in the Emergency Department.

    PubMed

    Markovic, Marija; Mathis, A Scott; Ghin, Hoytin Lee; Gardiner, Michelle; Fahim, Germin

    2017-01-01

    To compare the medication history error rate of the emergency department (ED) pharmacy technician with that of nursing staff and to describe the workflow environment. Fifty medication histories performed by an ED nurse followed by the pharmacy technician were evaluated for discrepancies (RN-PT group). A separate 50 medication histories performed by the pharmacy technician and observed with necessary intervention by the ED pharmacist were evaluated for discrepancies (PT-RPh group). Discrepancies were totaled and categorized by type of error and therapeutic category of the medication. The workflow description was obtained by observation and staff interview. A total of 474 medications in the RN-PT group and 521 in the PT-RPh group were evaluated. Nurses made at least one error in all 50 medication histories (100%), compared to 18 medication histories for the pharmacy technician (36%). In the RN-PT group, 408 medications had at least one error, corresponding to an accuracy rate of 14% for nurses. In the PT-RPh group, 30 medications had an error, corresponding to an accuracy rate of 94.4% for the pharmacy technician ( P < 0.0001). The most common error made by nurses was a missing medication (n = 109), while the most common error for the pharmacy technician was a wrong medication frequency (n = 19). The most common drug class with documented errors for ED nurses was cardiovascular medications (n = 100), while the pharmacy technician made the most errors in gastrointestinal medications (n = 11). Medication histories obtained by the pharmacy technician were significantly more accurate than those obtained by nurses in the emergency department.

  12. Language of Mechanisms: Exam Analysis Reveals Students' Strengths, Strategies, and Errors When Using the Electron-Pushing Formalism (Curved Arrows) in New Reactions

    ERIC Educational Resources Information Center

    Flynn, Alison B.; Featherstone, Ryan B.

    2017-01-01

    This study investigated students' successes, strategies, and common errors in their answers to questions that involved the electron-pushing (curved arrow) formalism (EPF), part of organic chemistry's language. We analyzed students' answers to two question types on midterms and final exams: (1) draw the electron-pushing arrows of a reaction step,…

  13. Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study

    PubMed Central

    Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César

    2011-01-01

    OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039

  14. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors and these reported therapeutic-related errors were mainly medication errors. Keyword searches combined with manual review indicated some medical errors that were reported in medical records. It had a low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
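
    The surveillance method itself amounts to a keyword scan followed by manual review. The sketch below is a hypothetical reimplementation (the sample notes are invented); the third note illustrates why the positive predictive value of terms like "error" is low.

    ```python
    # Keyword surveillance for explicitly reported errors in narrative notes.
    import re

    # Allow suffixes so that "errors" and "inadvertently" also match.
    pattern = re.compile(
        r"\b(?:mistake|error|incorrect|inadvertent|iatrogenic)\w*",
        re.IGNORECASE,
    )

    notes = [  # invented examples
        "Patient inadvertently received a double dose of heparin.",
        "Hospital course uncomplicated; no adverse events observed.",
        "Labs trended with standard error of the mean.",  # false positive
    ]

    for note in notes:
        hits = pattern.findall(note)
        if hits:  # flagged notes still require manual chart review
            print(f"flag for manual review ({', '.join(hits)}): {note}")
    ```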

  15. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
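
    A minimal simulation in the spirit of the comparison above: Gaussian outcomes with within-cluster correlation are generated under the null hypothesis, analyzed with a simple unweighted two-stage model (a t-test on the cluster means), and the empirical Type I error is tallied. The cluster sizes, ICC, and simulation count are arbitrary illustrative choices; the weighted two-stage and Kenward-Roger one-stage variants studied by the authors are not reproduced here.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_sims, alpha, icc = 2000, 0.05, 0.05
        sizes_arm1 = [5, 10, 20, 40, 80, 160]   # unbalanced cluster sizes
        sizes_arm2 = [5, 10, 20, 40, 80, 160]

        def simulate_arm(sizes):
            # Random cluster effect plus residual gives the desired ICC under the null.
            means = []
            for m in sizes:
                cluster_effect = rng.normal(0, np.sqrt(icc))
                y = cluster_effect + rng.normal(0, np.sqrt(1 - icc), size=m)
                means.append(y.mean())
            return np.array(means)

        rejections = 0
        for _ in range(n_sims):
            m1, m2 = simulate_arm(sizes_arm1), simulate_arm(sizes_arm2)
            # Two-stage analysis: ordinary t-test on the cluster means.
            _, p = stats.ttest_ind(m1, m2)
            rejections += p < alpha

        print(f"Empirical Type I error: {rejections / n_sims:.3f} (nominal {alpha})")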

  16. Spatial heterogeneity of type I error for local cluster detection tests

    PubMed Central

    2014-01-01

    Background Like power, the type I error of cluster detection tests (CDTs) should be assessed spatially. Indeed, both the type I error and the power of CDTs have a spatial component, because CDTs both detect and locate clusters. In the case of type I error, the spatial distribution of wrongly detected clusters (WDCs) can be particularly affected by edge effect. This simulation study aims to describe the spatial distribution of WDCs and to confirm and quantify the presence of edge effect. Methods A simulation of 40,000 datasets was performed under the null hypothesis of risk homogeneity. The simulation design used realistic parameters from survey data on birth defects and, in particular, two baseline risks. The simulated datasets were analyzed using Kulldorff's spatial scan statistic, a commonly used test whose behavior is otherwise well known. To describe the spatial distribution of type I error, we defined the participation rate for each spatial unit of the region. We used this indicator in a new statistical test proposed to confirm, as well as quantify, the edge effect. Results The predefined type I error of 5% was respected for both baseline risks. Results showed a strong edge effect in participation rates, with a descending gradient from center to edge, and WDCs more often centrally situated. Conclusions In routine analysis of real data, clusters on the edge of the region should be considered carefully, as they rarely occur when there is no true cluster. Further work is needed to combine results from power studies with this work in order to optimize CDT performance. PMID:24885343
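
    The participation-rate indicator defined above can be computed directly: for each spatial unit, count the proportion of null-hypothesis simulations in which that unit belonged to a wrongly detected cluster. The toy one-dimensional "grid" and the random cluster generator below merely stand in for the 40,000 birth-defect datasets and the scan statistic.

        import numpy as np

        rng = np.random.default_rng(1)
        n_units, n_sims = 25, 1000          # e.g., a 5x5 grid of spatial units
        participation = np.zeros(n_units)

        for _ in range(n_sims):
            if rng.random() < 0.05:         # 5% of null simulations yield a wrongly detected cluster
                center = rng.integers(n_units)
                cluster = {center, max(center - 1, 0), min(center + 1, n_units - 1)}
                for u in cluster:
                    participation[u] += 1

        participation /= n_sims             # participation rate per spatial unit
        print(participation.reshape(5, 5))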

  17. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs that control the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. The statistical validity of nursing home survey findings.

    PubMed

    Woolley, Douglas C

    2011-11-01

    The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). The design was an observational study with statistical analysis of the case under study and of alternative hypothetical cases, set in a skilled nursing home affiliated with a local medical school; the participants were the nursing home administrators and the medical director. The measure of interest was the probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med-pass errors are closer to 5%, the team would have to observe more than 2,000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates of the true error rates. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternative approaches to survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
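
    The sample-size argument here follows from binomial variability near the 5% citation threshold. The sketch below, which assumes surveyors cite a facility whenever the observed error count exceeds 5% of observations (an illustration, not the paper's exact calculation), shows that with 50 observations the citation decision is close to a coin flip for true rates near the threshold, while thousands of observations are needed to separate, say, 4% from 6% reliably.

        from scipy.stats import binom

        def prob_cited(true_rate, n, threshold=0.05):
            # P(observed error count exceeds threshold*n) under a given true error rate
            return binom.sf(int(threshold * n), n, true_rate)

        for n in (50, 200, 2000):
            for p in (0.04, 0.06, 0.10):
                print(f"n={n:5d} true rate={p:.0%}: P(citation)={prob_cited(p, n):.2f}")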

  19. Inaccuracy in health research news: a typology and predictions of scientists' perceptions of the accuracy of research news.

    PubMed

    Chang, Chingching

    2015-01-01

    This article introduces an integrated inaccuracy typology to explore the prevalence of inaccurate news coverage of health research. This typology suggests that errors, omissions, and misinterpretations are three common types of inaccuracy; errors and omissions are objective, whereas misinterpretations are subjective. Objective inaccuracy involves errors and omissions in describing the background or substantive information about the research, such as how, when, where, and on whom research was conducted. Subjective inaccuracy entails misinterpretations resulting either from a lack of expertise among journalists (e.g., misstating facts, errors in inferences, offering speculation as fact) or from the media's interest in profits (e.g., overemphasis on unique findings, overgeneralization of findings, shifting emphases). For this study, coders analyzed objective inaccuracy, while scientists rated subjective inaccuracy. The study then identifies what accounts for the variance in scientists' perceptions of inaccuracy in news articles citing their research; both objective and subjective inaccuracy offer significant predictors. Of the different types of objective inaccuracy, omission of research methods is a significant factor, whereas of the types of subjective inaccuracy, errors in inferences, overemphasis on uniqueness, and overgeneralization of findings are all significant predictors.

  20. Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.

    PubMed

    Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M

    2018-01-01

    Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate (p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.

  1. Sequencing artifacts in the type A influenza databases and attempts to correct them.

    PubMed

    Suarez, David L; Chester, Nikki; Hatfield, Jason

    2014-07-01

    There are over 276,000 influenza gene sequences in public databases, with the quality of the sequences determined by the contributor. As part of a high school class project, influenza sequences with possible errors were identified in the public databases based on the size of the gene being longer than expected, with the hypothesis that these sequences would contain an error. Students contacted sequence submitters to alert them to the possible sequence issue(s) and requested that the suspect sequence(s) be corrected as appropriate. Type A influenza viruses were screened, and gene segments longer than the accepted size were identified for further analysis. Attention was placed on sequences with additional nucleotides upstream or downstream of the highly conserved non-coding ends of the viral segments. A total of 1081 sequences were identified that met this criterion. Three types of errors were commonly observed: non-influenza primer sequence was not removed from the sequence; the PCR product was cloned and plasmid sequence was included in the sequence; and Taq polymerase added an adenine at the end of the PCR product. Internal insertions of nucleotide sequence were also commonly observed, but in many cases it was unclear whether the sequence was correct or actually contained an error. A total of 215 sequences, or 22.8% of the suspect sequences, were corrected in the public databases in the first year of the student project. Unfortunately, 138 additional sequences with possible errors were added to the databases in the second year. Greater awareness of the need for data integrity of sequences submitted to public databases is needed to fully reap the benefits of these large data sets. © 2014 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.

  2. The global burden of diagnostic errors in primary care.

    PubMed

    Singh, Hardeep; Schiff, Gordon D; Graber, Mark L; Onakpoya, Igho; Thompson, Matthew J

    2017-06-01

    Diagnosis is one of the most important tasks performed by primary care physicians. The World Health Organization (WHO) recently prioritized patient safety areas in primary care, and included diagnostic errors as a high-priority problem. In addition, a recent report from the Institute of Medicine in the USA, 'Improving Diagnosis in Health Care', concluded that most people will likely experience a diagnostic error in their lifetime. In this narrative review, we discuss the global significance, burden and contributory factors related to diagnostic errors in primary care. We synthesize available literature to discuss the types of presenting symptoms and conditions most commonly affected. We then summarize interventions based on available data and suggest next steps to reduce the global burden of diagnostic errors. Research suggests that we are unlikely to find a 'magic bullet' and confirms the need for a multifaceted approach to understand and address the many systems and cognitive issues involved in diagnostic error. Because errors involve many common conditions and are prevalent across all countries, the WHO's leadership at a global level will be instrumental to address the problem. Based on our review, we recommend that the WHO consider bringing together primary care leaders, practicing frontline clinicians, safety experts, policymakers, the health IT community, medical education and accreditation organizations, researchers from multiple disciplines, patient advocates, and funding bodies among others, to address the many common challenges and opportunities to reduce diagnostic error. This could lead to prioritization of practice changes needed to improve primary care as well as setting research priorities for intervention development to reduce diagnostic error. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  3. A Comparison of Medication Histories Obtained by a Pharmacy Technician Versus Nurses in the Emergency Department

    PubMed Central

    Markovic, Marija; Mathis, A. Scott; Ghin, Hoytin Lee; Gardiner, Michelle; Fahim, Germin

    2017-01-01

    Purpose: To compare the medication history error rate of the emergency department (ED) pharmacy technician with that of nursing staff and to describe the workflow environment. Methods: Fifty medication histories performed by an ED nurse and then repeated by the pharmacy technician were evaluated for discrepancies (RN-PT group). A separate 50 medication histories performed by the pharmacy technician, observed and corrected as necessary by the ED pharmacist, were evaluated for discrepancies (PT-RPh group). Discrepancies were totaled and categorized by type of error and therapeutic category of the medication. The workflow description was obtained by observation and staff interviews. Results: A total of 474 medications in the RN-PT group and 521 in the PT-RPh group were evaluated. Nurses made at least one error in all 50 medication histories (100%), compared to 18 medication histories for the pharmacy technician (36%). In the RN-PT group, 408 medications had at least one error, corresponding to an accuracy rate of 14% for nurses. In the PT-RPh group, 30 medications had an error, corresponding to an accuracy rate of 94.4% for the pharmacy technician (P < 0.0001). The most common error made by nurses was a missing medication (n = 109), while the most common error for the pharmacy technician was a wrong medication frequency (n = 19). The drug class with the most documented errors for ED nurses was cardiovascular medications (n = 100), while the pharmacy technician made the most errors in gastrointestinal medications (n = 11). Conclusion: Medication histories obtained by the pharmacy technician were significantly more accurate than those obtained by nurses in the emergency department. PMID:28090164

  4. Refractive errors and strabismus in Down's syndrome in Korea.

    PubMed

    Han, Dae Heon; Kim, Kyun Hyung; Paik, Hae Jung

    2012-12-01

    The aims of this study were to examine the distribution of refractive errors and the clinical characteristics of strabismus in Korean patients with Down's syndrome. A total of 41 Korean patients with Down's syndrome, with an average age of 11.9 years (range, 2 to 36 years), were screened for strabismus and refractive errors in 2009. Eighteen patients (43.9%) had strabismus. Ten patients (24.4%) exhibited esotropia and the other eight had intermittent exotropia. The most frequently detected type of esotropia was acquired non-accommodative esotropia, and that of exotropia was the basic type. Fifteen patients (36.6%) had hypermetropia and 20 (48.8%) had myopia. The patients with esotropia had refractive errors of +4.89 ± 3.73 diopters (D) and the patients with exotropia had refractive errors of -0.31 ± 1.78 D. Six of the ten patients with esotropia had an accommodation weakness. Twenty-one patients (63.4%) had astigmatism, eleven (26.8%) had anisometropia, and six (14.6%) had clinically significant anisometropia. In Korean patients with Down's syndrome, esotropia was more common than exotropia and hypermetropia more common than myopia. In particular, Down's syndrome patients with esotropia generally exhibit clinically significant hyperopic errors (>+3.00 D) and evidence of under-accommodation. Thus, hypermetropia and accommodation weakness could be contributing factors when esotropia occurs in Down's syndrome patients. Based on the results of this study, eye examinations of Down's syndrome patients should routinely include a measure of accommodation at near distances, and bifocals should be considered for those with evidence of under-accommodation.

  5. Physician's error: medical or legal concept?

    PubMed

    Mujovic-Zornic, Hajrija M

    2010-06-01

    This article deals with the common term covering the various physician's errors that often occur in the daily practice of health care. The author begins with the term medical malpractice, defined broadly as the practice of unjustified acts or failures to act on the part of a physician or other health care professional that result in harm to the patient. It is a general term that covers many types of medical errors, especially physician's errors. The author also discusses the concept of physician's error in particular, which is no longer understood only in the traditional way, as a classic error of doing something manually wrong without the necessary skills (the medical concept), but as an error that violates the patient's basic rights and has legal consequences (the legal concept). In every case, the essential element of liability is establishing the error as a breach of the physician's duty. The first point to note is that the standard of procedure and the standard of due care against which the physician will be judged is not that of the ordinary reasonable man, who enjoys no medical expertise. The court's decision should give the final answer and legal qualification in each concrete case. The author's conclusion is that stronger protection of human rights in the area of health equally demands a broader concept of physician's error, with emphasis on its legal subject matter.

  6. Medication Administration Errors in an Adult Emergency Department of a Tertiary Health Care Facility in Ghana.

    PubMed

    Acheampong, Franklin; Tetteh, Ashalley Raymond; Anto, Berko Panyin

    2016-12-01

    This study determined the incidence, types, clinical significance, and potential causes of medication administration errors (MAEs) at the emergency department (ED) of a tertiary health care facility in Ghana. This study used a cross-sectional nonparticipant observational technique. Study participants (nurses) were observed preparing and administering medication at the ED of a 2000-bed tertiary care hospital in Accra, Ghana. The observations were then compared with patients' medication charts, and identified errors were clarified with staff for possible causes. Of the 1332 observations made, involving 338 patients and 49 nurses, 362 had errors, representing 27.2%. However, the error rate excluding "lack of drug availability" fell to 12.8%. Without wrong time error, the error rate was 22.8%. The 2 most frequent error types were omission (n = 281, 77.6%) and wrong time (n = 58, 16%) errors. Omission error was mainly due to unavailability of medicine, 48.9% (n = 177). Although only one of the errors was potentially fatal, 26.7% were definitely clinically severe. The common themes that dominated the probable causes of MAEs were unavailability, staff factors, patient factors, prescription, and communication problems. This study gives credence to similar studies in different settings that MAEs occur frequently in the ED of hospitals. Most of the errors identified were not potentially fatal; however, preventive strategies need to be used to make life-saving processes such as drug administration in such specialized units error-free.

  7. A retrospective analysis of children with anisometropic amblyopia in Nepal.

    PubMed

    Sapkota, Kishor

    2014-06-01

    Anisometropia is one of the main causes of amblyopia. This study was conducted to investigate the association between the depth of amblyopia and the magnitude of anisometropia. A retrospective record review was conducted at the Nepal Eye Hospital between July 2006 and June 2011. The children included in this study were aged ≤13 years, diagnosed with unilateral anisometropic amblyopia, and had no strabismus or ocular pathology. Associations between the depth of amblyopia and the age and/or gender of the subjects, the laterality of the amblyopic eyes, the type and magnitude of refractive error of the amblyopic eyes, and the magnitude of anisometropia were statistically analyzed. Of the 189 children with unilateral anisometropic amblyopia (mean age 9.1 ± 2.8 years), 59% were boys. Amblyopia was more commonly found in the left eye (p < 0.001). The most common type of refractive error was astigmatism (61%). The depth of amblyopia was not associated with the gender (p = 0.864) or age (p = 0.341) of the subjects or the laterality of the eyes (p = 0.159), but it was associated with the type (p = 0.049) and magnitude (p = 0.013) of refractive error of the amblyopic eye and the magnitude of anisometropia (p = 0.002). Nepalese children with anisometropic amblyopia presented late to the hospital. The depth of amblyopia was strongly associated with the type and magnitude of refractive error of the amblyopic eye and the magnitude of anisometropia. Basic vision screening programs may therefore help to identify children with anisometropia and refer them to the hospital for timely management of anisometropic amblyopia if present.

  8. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands.

    PubMed

    Sánchez-Durán, José A; Hidalgo-López, José A; Castellanos-Ramos, Julián; Oballe-Peinado, Óscar; Vidal-Verdú, Fernando

    2015-08-19

    Tactile sensors suffer from many types of interference and error, such as crosstalk, non-linearity, drift and hysteresis, so calibration should be carried out to compensate for these deviations. However, this procedure is difficult in sensors mounted on artificial hands for robots or prosthetics, for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often, because the correction parameters are easily altered by time and surrounding conditions. This intensive and complex calibration, however, may be less critical than expected, or could at least be simplified. This is because manipulation algorithms do not commonly use the whole data set from the tactile image, but only a few parameters, such as the moments of the tactile image. These parameters may be changed less by common errors and interference, or at least their variations may be on the order of those caused by accepted limitations, such as reduced spatial resolution. This paper shows results from experiments that support this idea. The experiments are carried out with a high-performance commercial sensor as well as with a low-cost, error-prone sensor built with a procedure common in robotics.
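
    Since the manipulation parameters in question are moments of the tactile image, the sketch below computes the zeroth moment (total force), the centroid, and the orientation angle from a pressure array; these are the kinds of quantities the authors argue are comparatively robust to sensor errors. The 16x16 tactile image is synthetic.

        import numpy as np

        def tactile_moments(p):
            # Zeroth moment, centroid, and orientation of a tactile image p[row, col].
            rows, cols = np.indices(p.shape)
            m00 = p.sum()                                    # total applied force
            cy, cx = (rows * p).sum() / m00, (cols * p).sum() / m00
            mu20 = ((cols - cx) ** 2 * p).sum()              # central second moments
            mu02 = ((rows - cy) ** 2 * p).sum()
            mu11 = ((cols - cx) * (rows - cy) * p).sum()
            theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # orientation of the contact patch
            return m00, (cx, cy), theta

        # Synthetic 16x16 tactile image: an elongated, tilted contact patch.
        y, x = np.mgrid[0:16, 0:16]
        p = np.exp(-(((x - 8) + (y - 8)) ** 2 / 40 + ((x - 8) - (y - 8)) ** 2 / 8))
        print(tactile_moments(p))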

  9. A test of inflated zeros for Poisson regression models.

    PubMed

    He, Hua; Zhang, Hui; Ye, Peng; Tang, Wan

    2017-01-01

    Excessive zeros are common in practice and may cause overdispersion and invalidate inference when fitting Poisson regression models. There is a large body of literature on zero-inflated Poisson models. However, methods for testing whether there are excessive zeros are less well developed. The Vuong test comparing a Poisson and a zero-inflated Poisson model is commonly applied in practice. However, the type I error of the test often deviates seriously from the nominal level, casting serious doubt on the validity of the test in such applications. In this paper, we develop a new approach for testing inflated zeros under the Poisson model. Unlike the Vuong test for inflated zeros, our method does not require fitting a zero-inflated Poisson model to perform the test. Simulation studies show that, compared with the Vuong test, our approach is not only better at controlling the type I error rate but also yields more power.
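
    To make the excess-zeros problem concrete, the following check (an illustration, not the authors' proposed test) compares the observed number of zeros with the number a fitted Poisson model predicts; a large standardized discrepancy suggests zero inflation.

        import numpy as np

        rng = np.random.default_rng(2)
        n, lam, pi_zero = 500, 2.0, 0.25     # 25% structural zeros mixed into Poisson(2)
        y = rng.poisson(lam, n) * (rng.random(n) > pi_zero)

        lam_hat = y.mean()                   # Poisson MLE, ignoring zero inflation
        p0 = np.exp(-lam_hat)                # model-implied probability of a zero
        observed_zeros = (y == 0).sum()
        expected_zeros = n * p0

        # Crude z-statistic for excess zeros (illustrative only; it ignores the
        # uncertainty in lam_hat, which the paper's test handles properly).
        z = (observed_zeros - expected_zeros) / np.sqrt(n * p0 * (1 - p0))
        print(observed_zeros, round(expected_zeros, 1), round(z, 2))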

  10. Modeling Inborn Errors of Hepatic Metabolism Using Induced Pluripotent Stem Cells.

    PubMed

    Pournasr, Behshad; Duncan, Stephen A

    2017-11-01

    Inborn errors of hepatic metabolism are caused by deficiencies, commonly within a single enzyme, that arise from heritable mutations in the genome. Individually such diseases are rare, but collectively they are common. Advances in genome-wide association studies and DNA sequencing have helped researchers identify the underlying genetic basis of such diseases. Unfortunately, cellular and animal models that accurately recapitulate these inborn errors of hepatic metabolism in the laboratory have been lacking. Recently, investigators have exploited molecular techniques to generate induced pluripotent stem cells from patients' somatic cells. Induced pluripotent stem cells can differentiate into a wide variety of cell types, including hepatocytes, thereby offering an innovative approach to unravel the mechanisms underlying inborn errors of hepatic metabolism. Moreover, such cell models could potentially provide a platform for the discovery of therapeutics. In this mini-review, we present a brief overview of the state-of-the-art in using pluripotent stem cells for such studies. © 2017 American Heart Association, Inc.

  11. Limitations of Surface Mapping Technology in Accurately Identifying Critical Errors in Dental Students' Crown Preparations.

    PubMed

    Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G

    2018-01-01

    The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school that compared an ideal crown preparation made by a faculty member on a dentoform with modified preparations. Two types of preparation errors were created by adding flowable composite to the occlusal surface of identical dies of the preparations to represent underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.

  12. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    PubMed

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors, and each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report in both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  13. Reducing medication errors in critical care: a multimodal approach

    PubMed Central

    Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad

    2014-01-01

    The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478

  14. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means of dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.

  15. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
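
    The interval reported for the 7.6% error rate can be reproduced with a normal-approximation binomial confidence interval; the sketch below assumes a simple Wald interval, which matches the figures quoted above.

        import math

        errors, prescriptions = 143, 1879
        p_hat = errors / prescriptions
        se = math.sqrt(p_hat * (1 - p_hat) / prescriptions)
        lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
        print(f"{p_hat:.1%} (95% CI {lo:.1%} to {hi:.1%})")   # 7.6% (95% CI 6.4% to 8.8%)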

  16. Risk factors for refractive errors in primary school children (6-12 years old) in Nakhon Pathom Province.

    PubMed

    Yingyong, Penpimol

    2010-11-01

    Refractive error is one of the leading causes of visual impairment in children. An analysis of risk factors for refractive error is required to reduce and prevent this common eye disease. The objective was to identify the risk factors associated with refractive errors in primary school children (6-12 years old) in Nakhon Pathom province. A population-based cross-sectional analytic study was conducted between October 2008 and September 2009 in Nakhon Pathom. Refractive error, parental refractive status, and hours per week of near activities (studying, reading books, watching television, playing video games, or working on the computer) were assessed in the 377 children who participated in this study. The most common type of refractive error in primary school children was myopia. Myopic children were more likely to have parents with myopia, and children with myopia spent more time on near activities. The multivariate odds ratio (95% confidence interval) was 6.37 (2.26-17.78) for having two myopic parents and 1.019 (1.005-1.033) for each diopter-hour per week of near work. Multivariate logistic regression models showed no confounding between parental myopia and near work, suggesting that each factor has an independent association with myopia. Statistical analysis by logistic regression revealed that family history of refractive error and hours of near work were significantly associated with refractive error in primary school children.

  17. Evaluation of near-miss and adverse events in radiation oncology using a comprehensive causal factor taxonomy.

    PubMed

    Spraker, Matthew B; Fain, Robert; Gopan, Olga; Zeng, Jing; Nyflot, Matthew; Jordan, Loucille; Kane, Gabrielle; Ford, Eric

    Incident learning systems (ILSs) are a popular strategy for improving safety in radiation oncology (RO) clinics, but few reports focus on the causes of errors in RO. The goal of this study was to test a causal factor taxonomy developed in 2012 by the American Association of Physicists in Medicine and adopted for use in the RO: Incident Learning System (RO-ILS). Three hundred event reports were randomly selected from an institutional ILS database and Safety in Radiation Oncology (SAFRON), an international ILS. The reports were split into 3 groups of 100 events each: low-risk institutional, high-risk institutional, and SAFRON. Three raters retrospectively analyzed each event for contributing factors using the American Association of Physicists in Medicine taxonomy. No event was described by a single causal factor (median, 7 causal factors per event). The causal factor taxonomy was found to be applicable to all events, but 4 causal factors were not described in the taxonomy: linear accelerator failure (n = 3), hardware/equipment failure (n = 2), failure to follow through with a quality improvement intervention (n = 1), and misleading workflow documentation (n = 1). The causal factor categories contributing most often to events were similar across all event types, and the most common specific causal factor was a "slip causing physical error." Poor human factors engineering was the only causal factor found to contribute more frequently to high-risk institutional than to low-risk institutional events. The taxonomy was found to be applicable to all events and may be useful in root cause analyses and future studies. Communication and human behaviors were the most common contributors across all types of events. Because poor human factors engineering contributed to high-risk more than to low-risk institutional events, addressing it may represent a strategy for reducing errors of all types. Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  18. Comparison of medication safety effectiveness among nine critical access hospitals.

    PubMed

    Cochran, Gary L; Haynatzki, Gleb

    2013-12-15

    The rates of medication errors across three different medication dispensing and administration systems frequently used in critical access hospitals (CAHs) were analyzed. Nine CAHs agreed to participate in this prospective study and were assigned to one of three groups based on similarities in their medication-use processes: (1) less than 10 hours per week of onsite pharmacy support and no bedside barcode system, (2) onsite pharmacy support for 40 hours per week and no bedside barcode system, and (3) onsite pharmacy support for 40 or more hours per week with a bedside barcode system. Errors were characterized by severity, phase of origination, type, and cause. Characteristics of the medication being administered and a number of best practices were collected for each medication pass. Logistic regression was used to identify significant predictors of errors. A total of 3103 medication passes were observed. More medication errors originated in hospitals that had onsite pharmacy support for less than 10 hours per week and no bedside barcode system than in other types of hospitals. A bedside barcode system had the greatest impact on lowering the odds of an error reaching the patient. Wrong dose and omission were common error types. Human factors and communication were the two most frequently identified causes of error for all three systems. Medication error rates were lower in CAHs with 40 or more hours per week of onsite pharmacy support with or without a bedside barcode system compared with hospitals with less than 10 hours per week of pharmacy support and no bedside barcode system.
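
    The study's logistic regression step can be sketched as follows on synthetic data, with hypothetical predictors (an indicator for 40 or more hours of onsite pharmacy support per week and one for a bedside barcode system); the coefficients in the data-generating step are invented for illustration, not the study's estimates.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)
        n = 3000                                    # observed medication passes
        pharmacy_40h = rng.integers(0, 2, n)        # 1 = 40+ h/week onsite pharmacy support
        barcode = rng.integers(0, 2, n) * pharmacy_40h   # barcode only where support exists

        # Synthetic error mechanism: both factors lower the odds of an error.
        logit = -2.0 - 0.5 * pharmacy_40h - 0.9 * barcode
        error = rng.random(n) < 1 / (1 + np.exp(-logit))

        X = sm.add_constant(np.column_stack([pharmacy_40h, barcode]))
        fit = sm.Logit(error.astype(int), X).fit(disp=False)
        print(np.exp(fit.params))                   # odds ratios for intercept and predictors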

  19. 32-Bit-Wide Memory Tolerates Failures

    NASA Technical Reports Server (NTRS)

    Buskirk, Glenn A.

    1990-01-01

    An electronic memory system of 32-bit words corrects bit errors caused by some common types of failures, even the failure of an entire 4-bit-wide random-access-memory (RAM) chip. It detects the failure of two such chips, so the user is warned that the output of the memory may contain errors. The memory includes eight 4-bit-wide DRAMs configured so that each bit of each DRAM is assigned to a different one of four parallel 8-bit words. Each DRAM thus contributes only 1 bit to each 8-bit word.
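
    A small sketch of the interleaving idea described above: because each chip contributes one bit to each word, even a whole-chip failure corrupts at most one bit per word, which a per-word single-error-correcting code could then repair. This illustrates the principle only and is not the flight design.

        import random

        N_CHIPS, BITS_PER_CHIP = 8, 4      # eight 4-bit-wide DRAMs

        # chips[c][w] is the bit that chip c contributes to parallel word w.
        chips = [[random.randint(0, 1) for _ in range(BITS_PER_CHIP)] for _ in range(N_CHIPS)]

        def words(chips):
            # Assemble the four parallel 8-bit words, one bit from each chip.
            return [[chips[c][w] for c in range(N_CHIPS)] for w in range(BITS_PER_CHIP)]

        good = words(chips)
        dead_chip = 5                       # simulate a whole-chip failure
        chips[dead_chip] = [random.randint(0, 1) for _ in range(BITS_PER_CHIP)]
        bad = words(chips)

        # Each word differs from the original in at most one bit position (dead_chip),
        # so a per-word single-error-correcting code can repair every word.
        for w, (g, b) in enumerate(zip(good, bad)):
            flipped = sum(x != y for x, y in zip(g, b))
            print(f"word {w}: {flipped} corrupted bit(s)")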

  20. Type I error probability spending for post-market drug and vaccine safety surveillance with binomial data.

    PubMed

    Silva, Ivair R

    2018-01-15

    Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that for clinical trials, when the null hypothesis is not rejected, it is still important to minimize the sample size. In post-market drug and vaccine safety surveillance, that is not important. There, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more suitable for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
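
    The shape distinction drawn above can be illustrated with the power family of spending functions F(t) = alpha * t^rho, which is concave for rho < 1 (spending error probability early) and convex for rho > 1 (saving it for later looks); the power family is used here only as a convenient example.

        import numpy as np

        alpha, looks = 0.05, np.linspace(0.2, 1.0, 5)   # information fractions of 5 looks

        def power_spending(t, rho):
            return alpha * t ** rho

        for rho, shape in ((0.5, "concave"), (3.0, "convex")):
            cum = power_spending(looks, rho)
            inc = np.diff(np.concatenate(([0.0], cum)))  # alpha spent at each look
            print(f"rho={rho} ({shape}): cumulative={np.round(cum, 4)} incremental={np.round(inc, 4)}")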

  1. Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.

    PubMed

    Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean

    2017-06-01

    Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% of 17,889 prescribed medications. There was no significant difference in prescribing error rates between different types of hospitals or wards. Electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficits. The most common contributing factors were lack of supervision and lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.

  2. Speech errors of amnesic H.M.: unlike everyday slips-of-the-tongue.

    PubMed

    MacKay, Donald G; James, Lori E; Hadley, Christopher B; Fogler, Kethera A

    2011-03-01

    Three language production studies indicate that amnesic H.M. produces speech errors unlike everyday slips-of-the-tongue. Study 1 was a naturalistic task: H.M. and six controls closely matched for age, education, background and IQ described what makes captioned cartoons funny. Nine judges rated the descriptions blind to speaker identity and gave reliably more negative ratings for coherence, vagueness, comprehensibility, grammaticality, and adequacy of humor-description for H.M. than the controls. Study 2 examined "major errors", a novel type of speech error that is uncorrected and reduces the coherence, grammaticality, accuracy and/or comprehensibility of an utterance. The results indicated that H.M. produced seven types of major errors reliably more often than controls: substitutions, omissions, additions, transpositions, reading errors, free associations, and accuracy errors. These results contradict recent claims that H.M. retains unconscious or implicit language abilities and produces spoken discourse that is "sophisticated," "intact" and "without major errors." Study 3 examined whether three classical types of errors (omissions, additions, and substitutions of words and phrases) differed for H.M. versus controls in basic nature and relative frequency by error type. The results indicated that omissions, and especially multi-word omissions, were relatively more common for H.M. than the controls; and substitutions violated the syntactic class regularity (whereby, e.g., nouns substitute with nouns but not verbs) relatively more often for H.M. than the controls. These results suggest that H.M.'s medial temporal lobe damage impaired his ability to rapidly form new connections between units in the cortex, a process necessary to form complete and coherent internal representations for novel sentence-level plans. In short, different brain mechanisms underlie H.M.'s major errors (which reflect incomplete and incoherent sentence-level plans) versus everyday slips-of-the tongue (which reflect errors in activating pre-planned units in fully intact sentence-level plans). Implications of the results of Studies 1-3 are discussed for systems theory, binding theory and relational memory theories. Copyright © 2010 Elsevier Srl. All rights reserved.

  3. Complex Problem Solving in a Workplace Setting.

    ERIC Educational Resources Information Center

    Middleton, Howard

    2002-01-01

    Studied complex problem solving in the hospitality industry through interviews with six office staff members and managers. Findings show it is possible to construct a taxonomy of problem types and that the most common approach can be termed "trial and error." (SLD)

  4. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  5. Accuracy of intravenous infusion pumps in continuous renal replacement therapies.

    PubMed

    Jenkins, R; Harrison, H; Chen, B; Arnold, D; Funk, J

    1992-01-01

    Most extracorporeal continuous renal replacement therapies (CRRT) require inflow pumping of either dialysate, filtrate replacement solution, or both. Outflow of spent dialysate and ultrafiltrate can be accomplished by gravity drainage or pump. Intravenous infusion pumps have been commonly used for these purposes, although little is known about their accuracy. To evaluate the accuracy of two different types of intravenous infusion pumps used in CRRT, we studied flow rates at nine different pressure variations in three piston-type and three linear peristaltic pumps. The results showed that the error of either pump type did not differ between flow rates of 4 and 16 ml/min. Both types of pumps were affected by fluid circuit pressures, although the pressure conditions under which error was low differed for each pump type. The linear peristaltic pumps were most accurate under conditions of low pump inlet pressure, whereas the piston pumps were most accurate under conditions of a low pump pressure gradient (outlet minus inlet) of 0 or -100 mmHg. The magnitude of error outside these conditions was substantial, reaching 12.5% for the linear peristaltic pump when inlet pressure was -100 mmHg and outlet pressure was 100 mmHg. Error may be minimized in the clinical setting by choosing the pump type best suited to the pressure conditions expected for the renal replacement modality in use.

  6. Prescribing errors in adult congenital heart disease patients admitted to a pediatric cardiovascular intensive care unit.

    PubMed

    Echeta, Genevieve; Moffett, Brady S; Checchia, Paul; Benton, Mary Kay; Klouda, Leda; Rodriguez, Fred H; Franklin, Wayne

    2014-01-01

    Adults with congenital heart disease (CHD) are often cared for at pediatric hospitals. There are no data describing the incidence or types of medication prescribing errors in adult patients admitted to a pediatric cardiovascular intensive care unit (CVICU). A review of patients >18 years of age admitted to the pediatric CVICU at our institution from 2009 to 2011 was performed. A comparator group <18 years of age but >70 kg (a typical adult weight) was identified. Medication prescribing errors were determined according to a commonly used adult drug reference. An independent panel consisting of a physician specializing in the care of adult CHD patients, a nurse, and a pharmacist evaluated all errors. Medication orders were classified as appropriate, underdose, overdose, or nonstandard (dosing per weight instead of standard adult dosing), and the severity of each error was classified. Eighty-five adult admissions (74 patients) and 33 pediatric admissions (32 patients) met study criteria (mean age 27.5 ± 9.4 years, 53% male vs. 14.9 ± 1.8 years, 63% male). A cardiothoracic surgical procedure occurred in 81.4% of admissions. Adult admissions weighed less than pediatric admissions (72.8 ± 22.4 kg vs. 85.6 ± 14.9 kg, P < .01), but hospital length of stay was similar (adult 6 days [range 1-216 days]; pediatric 5 days [range 2-123 days], P = .52). A total of 112 prescribing errors were identified, and they occurred less often in adults (42.4% of admissions vs. 66.7% of admissions, P = .02). Adults had a lower mean number of errors (0.7 errors per adult admission vs. 1.7 errors per pediatric admission, P < .01). Prescribing errors occurred most commonly with antimicrobials (n = 27), and underdosing was the most common category of error. Most prescribing errors were determined to have not caused harm to the patient. Prescribing errors occur frequently in adult patients admitted to a pediatric CVICU but occur more often in pediatric patients of adult weight. © 2013 Wiley Periodicals, Inc.

  7. Rare high-impact disease variants: properties and identifications.

    PubMed

    Park, Leeyoung; Kim, Ju Han

    2016-03-21

    Although many genome-wide association studies have been performed, the identification of disease polymorphisms remains important. It is now suspected that many rare disease variants induce the association signal of common variants in linkage disequilibrium (LD). Based on recent development of genetic models, the current study provides explanations of the existence of rare variants with high impacts and common variants with low impacts. Disease variants are neither necessary nor sufficient due to gene-gene or gene-environment interactions. A new method was developed based on theoretical aspects to identify both rare and common disease variants by their genotypes. Common disease variants were identified with relatively small odds ratios and relatively small sample sizes, except for specific situations in which the disease variants were in strong LD with a variant with a higher frequency. Rare disease variants with small impacts were difficult to identify without increasing sample sizes; however, the method was reasonably accurate for rare disease variants with high impacts. For rare variants, dominant variants generally showed better Type II error rates than recessive variants; however, the trend was reversed for common variants. Type II error rates increased in gene regions containing more than two disease variants because the more common variant, rather than both disease variants, was usually identified. The proposed method would be useful for identifying common disease variants with small impacts and rare disease variants with large impacts when disease variants have the same effects on disease presentation.

  8. Human errors and occupational injuries of older female workers in the residential healthcare facilities for the elderly.

    PubMed

    Kim, Jun Sik; Jeong, Byung Yong

    2018-05-03

    The study aimed to describe the characteristics of occupational injuries among female workers in residential healthcare facilities for the elderly and to analyze the human errors that cause these accidents. From the national industrial accident compensation data, 506 injuries to female workers were analyzed by age and occupation. The results showed that medical service workers were the most prevalent (54.1%), followed by social welfare workers (20.4%). Among the injured, 55.7% had less than 1 year of work experience, and 37.9% were ≥60 years old. Slips/falls were the most common type of accident (42.7%), and the proportion injured by slips/falls increases with age. Among human errors, action errors were the primary cause, followed by perception errors and cognition errors. In addition, the proportions of injuries caused by perception errors and by action errors each increase with age. The findings of this study suggest that there is a need to design workplaces that accommodate the characteristics of older female workers.

  9. Partial pressure analysis in space testing

    NASA Technical Reports Server (NTRS)

    Tilford, Charles R.

    1994-01-01

    For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPAs) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibration procedures are described that can detect and/or correct other errors.

  10. Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase

    DOE PAGES

    McInerney, Peter; Adams, Paul; Hadi, Masood Z.

    2014-01-01

    As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), the lowest error rates were found with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
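
    The abstract does not spell out the fidelity arithmetic, but the per-base, per-doubling error rate usually reported in such comparisons can be computed as in this sketch; the numbers are placeholders, not the study's data.

    ```python
    from math import log2

    def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
        """Errors per base per template doubling."""
        doublings = log2(fold_amplification)
        return mutations / (bases_sequenced * doublings)

    # e.g. 25 mutations in 1.2e6 sequenced bases after 1e5-fold amplification
    print(f"{pcr_error_rate(25, 1.2e6, 1e5):.2e} errors/base/doubling")
    ```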

  11. Incidence of speech recognition errors in the emergency department.

    PubMed

    Goss, Foster R; Zhou, Li; Weiner, Scott G

    2016-09-01

    Physician use of computerized speech recognition (SR) technology has risen in recent years due to its ease of use and efficiency at the point of care. However, error rates between 10 and 23% have been observed, raising concern about the number of errors being entered into the permanent medical record, their impact on quality of care, and the medical liability that may arise. Our aim was to determine the incidence and types of SR errors introduced by this technology in the emergency department (ED). The setting was a Level 1 emergency department with 42,000 visits/year in a tertiary academic teaching hospital. A random sample of 100 notes dictated by attending emergency physicians (EPs) using SR software was collected from the ED electronic health record between January and June 2012. Two board-certified EPs annotated the notes and conducted error analysis independently. An existing classification schema was adopted to classify errors into eight error types, and critical errors deemed to potentially impact patient care were identified. There were 128 errors in total, or 1.3 errors per note, and 14.8% (n=19) of errors were judged to be critical. 71% of notes contained errors, and 15% contained one or more critical errors. Annunciation errors were the most frequent at 53.9% (n=69), followed by deletions at 18.0% (n=23) and added words at 11.7% (n=15). Nonsense errors, homonyms, and spelling errors were present in 10.9% (n=14), 4.7% (n=6), and 0.8% (n=1) of notes, respectively. There were no suffix or dictionary errors. Inter-annotator agreement was 97.8%. This is the first study to classify speech recognition errors in dictated emergency department notes. Speech recognition errors occur commonly, with annunciation errors being the most frequent. Error rates were comparable to, if not lower than, previous studies. 15% of errors were deemed critical, potentially leading to miscommunication that could affect patient care.

  12. Antidepressant and antipsychotic medication errors reported to United States poison control centers.

    PubMed

    Kamboj, Alisha; Spiller, Henry A; Casavant, Marcel J; Chounthirath, Thitphalak; Hodges, Nichole L; Smith, Gary A

    2018-05-08

    To investigate unintentional therapeutic medication errors associated with antidepressant and antipsychotic medications in the United States and expand current knowledge on the types of errors commonly associated with these medications. A retrospective analysis of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications was conducted using data from the National Poison Data System. From 2000 to 2012, poison control centers received 207 670 calls reporting unintentional therapeutic errors associated with antidepressant or antipsychotic medications that occurred outside of a health care facility, averaging 15 975 errors annually. The rate of antidepressant-related errors increased by 50.6% from 2000 to 2004, decreased by 6.5% from 2004 to 2006, and then increased by 13.0% from 2006 to 2012. The rate of errors related to antipsychotic medications increased by 99.7% from 2000 to 2004 and then by 8.8% from 2004 to 2012. Overall, 70.1% of reported errors occurred among adults, and 59.3% occurred among females. The medications most frequently associated with errors were selective serotonin reuptake inhibitors (30.3%), atypical antipsychotics (24.1%), and other types of antidepressants (21.5%). Most medication errors took place when an individual inadvertently took or was given a medication twice (41.0%), inadvertently took someone else's medication (15.6%), or took the wrong medication (15.6%). This study provides a comprehensive overview of non-health care facility unintentional therapeutic errors associated with antidepressant and antipsychotic medications. The frequency and rate of these errors increased significantly from 2000 to 2012. Given that use of these medications is increasing in the US, this study provides important information about the epidemiology of the associated medication errors.

  13. Reflection of medical error highlighted on media in Turkey: A retrospective study

    PubMed Central

    Isik, Oguz; Bayin, Gamze; Ugurluoglu, Ozgur

    2016-01-01

    Objective: This study was performed with the aim of identifying how news on medical errors is transmitted, and how the types, causes, and outcomes of medical errors are reflected by the media in Turkey. Methods: A content analysis method was used. The data were acquired by scanning the five national newspapers with the largest circulations for news about medical errors between the years 2012 and 2015. Specific selection criteria were applied to the resulting items, and 116 news items remained after all eliminations. Results: The largest share of the medical errors reported in the news (40.5%) resulted from negligence of the medical staff. Physicians were responsible for 74.1% of the medical errors, and the errors most commonly occurred in state hospitals (31.9%). Another important result was that medical errors largely ended in either patient death (51.7%) or permanent damage and disability (25.0%). Conclusion: The news concerning medical errors provided information about the types, causes, and results of these errors and reflected the media's point of view on the issue. Examining the content of medical errors reported by the media is important and calls for appropriate interventions to avoid and minimize the occurrence of medical errors by improving the healthcare delivery system. PMID:27882026

  14. Patient safety in the clinical laboratory: a longitudinal analysis of specimen identification errors.

    PubMed

    Wagar, Elizabeth A; Tamashiro, Lorraine; Yasin, Bushra; Hilborne, Lee; Bruckner, David A

    2006-11-01

    Patient safety is an increasingly visible and important mission for clinical laboratories. Accreditation and regulatory organizations are paying attention to improving processes related to patient identification and specimen labeling, because errors in these areas jeopardize patient safety, are common, and are avoidable through improvement in the total testing process. The objective was to assess patient identification and specimen labeling improvement after multiple implementation projects using longitudinal statistical tools. Specimen errors were categorized by a multidisciplinary health care team. Patient identification errors were grouped into 3 categories: (1) specimen/requisition mismatch, (2) unlabeled specimens, and (3) mislabeled specimens. Specimens with these types of identification errors were compared preimplementation and postimplementation for 3 patient safety projects: (1) reorganization of phlebotomy (4 months); (2) introduction of an electronic event reporting system (10 months); and (3) activation of an automated processing system (14 months), over a 24-month period, using trend analysis and Student t test statistics. Of 16,632 total specimen errors, mislabeled specimens, requisition mismatches, and unlabeled specimens represented 1.0%, 6.3%, and 4.6% of errors, respectively. The Student t test showed a significant decrease in the most serious error, mislabeled specimens (P < .001), compared with before implementation of the 3 patient safety projects. Trend analysis demonstrated decreases in all 3 error types over 26 months. Applying performance-improvement strategies that focus longitudinally on specimen labeling errors can significantly reduce errors, thereby improving patient safety. This is an important area in which laboratory professionals, working in interdisciplinary teams, can improve safety and outcomes of care.
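
    A minimal sketch of the pre/post comparison described above, a Student t test on monthly mislabeled-specimen rates; the monthly rates are fabricated placeholders for illustration.

    ```python
    from scipy import stats

    pre  = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2]   # mislabels per 10,000 specimens (fabricated)
    post = [2.9, 3.1, 2.7, 3.0, 2.8, 2.6]   # after the safety projects (fabricated)
    t, p = stats.ttest_ind(pre, post)
    print(f"t = {t:.2f}, p = {p:.4f}")
    ```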

  15. Descriptive analysis of medication errors reported to the Egyptian national online reporting system during six months.

    PubMed

    Shehata, Zahraa Hassan Abdelrahman; Sabri, Nagwa Ali; Elmelegy, Ahmed Abdelsalam

    2016-03-01

    This study analyzes reports to the Egyptian medication error (ME) reporting system from June to December 2014. Fifty hospital pharmacists received training on ME reporting using the national reporting system. All received reports were reviewed and analyzed. The data analyzed were patient age, gender, clinical setting, stage, type, medication(s), outcome, cause(s), and recommendation(s). Over the course of 6 months, 12,000 valid reports were gathered and included in this analysis. The majority (66%) came from inpatient settings, while 23% came from intensive care units and 11% came from outpatient departments. Prescribing errors were the most common type of ME (54%), followed by monitoring (25%) and administration errors (16%). The most frequent error was incorrect dose (20%), followed by drug interactions, incorrect drug, and incorrect frequency. Most reports were potential (25%), prevented (11%), or harmless (51%) errors; only 13% of reported errors led to patient harm. The top three medication classes involved in reported MEs were antibiotics, drugs acting on the central nervous system, and drugs acting on the cardiovascular system. Causes of MEs were mostly lack of knowledge, environmental factors, lack of drug information sources, and incomplete prescribing. Recommendations for addressing MEs were mainly staff training, local ME reporting, and improving the work environment. Different healthcare systems share common problems, so sharing experiences at the national level is essential to enable learning from MEs. Internationally, there is a great need to standardize ME terminology to facilitate knowledge transfer. Underreporting, inaccurate reporting, and a lack of reporter diversity are limitations of this study. Egypt now has a national database of MEs that allows researchers and decision makers to assess the problem, identify its root causes, and develop preventive strategies.

  16. The computation of equating errors in international surveys in education.

    PubMed

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly, a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses, these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step, while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reports of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally, an alternative method based on replication techniques will be presented, evaluated in a simulation study, and then applied to the PISA 2000 data.
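
    One common formulation of the linking error (assumed here for illustration; the paper discusses PISA's exact procedure and alternatives) treats the common items as a random sample and takes the standard error of the mean item-difficulty shift between the two calibrations.

    ```python
    import numpy as np

    # shift in difficulty of each common item between the two calibrations (logits)
    shifts = np.array([0.05, -0.12, 0.03, 0.08, -0.02, 0.10, -0.07, 0.01])
    linking_error = shifts.std(ddof=1) / np.sqrt(len(shifts))
    print(f"linking error = {linking_error:.3f} logits")
    ```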

  17. Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain

    PubMed Central

    Schwartz, Myrna F.; Kimberg, Daniel Y.; Walker, Grant M.; Brecher, Adelyn; Faseyitan, Olufunsho K.; Dell, Gary S.; Mirman, Daniel; Coslett, H. Branch

    2011-01-01

    It is thought that semantic memory represents taxonomic information differently from thematic information. This study investigated the neural basis for the taxonomic-thematic distinction in a unique way. We gathered picture-naming errors from 86 individuals with poststroke language impairment (aphasia). Error rates were determined separately for taxonomic errors (“pear” in response to apple) and thematic errors (“worm” in response to apple), and their shared variance was regressed out of each measure. With the segmented lesions normalized to a common template, we carried out voxel-based lesion-symptom mapping on each error type separately. We found that taxonomic errors localized to the left anterior temporal lobe and thematic errors localized to the left temporoparietal junction. This is an indication that the contribution of these regions to semantic memory cleaves along taxonomic-thematic lines. Our findings show that a distinction long recognized in the psychological sciences is grounded in the structure and function of the human brain. PMID:21540329
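
    The "shared variance regressed out" step can be illustrated with ordinary least squares residualization; the per-patient error rates below are random placeholders, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    taxonomic = rng.random(86)                        # error rate per patient (fake)
    thematic = 0.5 * taxonomic + 0.5 * rng.random(86)

    def residualize(y, x):
        """Return y with its linear dependence on x removed (OLS residuals)."""
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    taxonomic_unique = residualize(taxonomic, thematic)
    print(f"{np.corrcoef(taxonomic_unique, thematic)[0, 1]:.2e}")  # ~0 by construction
    ```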

  18. Medical errors; causes, consequences, emotional response and resulting behavioral change

    PubMed Central

    Bari, Attia; Khan, Rehan Ahmed; Rathore, Ahsan Waheed

    2016-01-01

    Objective: To determine the causes of medical errors, the emotional and behavioral responses of pediatric medicine residents to their errors, and the resulting behavior changes affecting their future training. Methods: One hundred thirty postgraduate residents were included in the study. Residents were asked to complete a questionnaire about their errors and responses to them in three domains: emotional response, learning behavior, and disclosure of the error. The names of the participants were kept confidential. Data were analyzed using SPSS version 20. Results: A total of 130 residents were included. The majority, 128 (98.5%), described some form of error: 24 (19%) were serious errors, 63 (48%) minor errors, and 24 (19%) near misses; 2 (2%) had never encountered an error, and 17 (12%) did not mention the type of error but mentioned causes and consequences. Only 73 (57%) residents disclosed medical errors to their senior physician, and disclosure to the patient's family was negligible (15; 11%). Fatigue due to long duty hours (85; 65%), inadequate experience (66; 52%), inadequate supervision (58; 48%), and complex cases (58; 45%) were common causes of medical errors. Negative emotions were common and were significantly associated with lack of knowledge (p=0.001), missing warning signs (p<0.001), not seeking advice (p=0.003), and procedural complications (p=0.001). Medical errors had a significant impact on residents' behavior: 119 (93%) residents became more careful, 109 (86%) increased advice seeking from seniors, and 109 (86%) started paying more attention to details. Intrinsic causes of errors were significantly associated with increased information-seeking behavior and vigilance (p=0.003 and p=0.01, respectively). Conclusion: Medical errors committed by residents are inadequately disclosed to senior physicians and result in negative emotions, but there was a positive change in the residents' behavior, which resulted in improvement in their future training and patient care. PMID:27375682

  20. Prevalence and types of preanalytical error in hematology laboratory of a tertiary care hospital in South India.

    PubMed

    Arul, Pitchaikaran; Pushparaj, Magesh; Pandian, Kanmani; Chennimalai, Lingasamy; Rajendran, Karthika; Selvaraj, Eniya; Masilamani, Suresh

    2018-01-01

    An important component of laboratory medicine is the preanalytical phase. Since the laboratory report plays a major role in patient management, more importance should be given to the quality of laboratory tests. The present study was undertaken to find the prevalence and types of preanalytical errors at a tertiary care hospital in South India. In this cross-sectional study, a total of 118,732 samples (62,474 from the outpatient department [OPD] and 56,258 from the inpatient department [IPD]) were received in the hematology laboratory. These samples were analyzed for preanalytical errors such as misidentification, incorrect vials, inadequate samples, clotted samples, diluted samples, and hemolyzed samples. Preanalytical errors were found in 513 samples, 0.43% of the total number of samples received. The most common preanalytical error observed was inadequate samples, followed by clotted samples. Overall frequencies (OPD and IPD combined) of misidentification, incorrect vials, inadequate samples, clotted samples, diluted samples, and hemolyzed samples were 0.02%, 0.05%, 0.2%, 0.12%, 0.02%, and 0.03%, respectively. The present study concluded that incorrect phlebotomy technique due to lack of awareness is the main reason for preanalytical errors. These errors can be avoided by proper communication and coordination between the laboratory and the wards, proper training and continuing medical education programs for laboratory and paramedical staff, and knowledge of the intervening factors that can influence laboratory results.

  1. Dopamine prediction error responses integrate subjective value from different reward dimensions

    PubMed Central

    Lak, Armin; Stauffer, William R.; Schultz, Wolfram

    2014-01-01

    Prediction error signals enable us to learn through experience. These experiences include economic choices between different rewards that vary along multiple dimensions. Therefore, an ideal way to reinforce economic choice is to encode a prediction error that reflects the subjective value integrated across these reward dimensions. Previous studies demonstrated that dopamine prediction error responses reflect the value of singular reward attributes that include magnitude, probability, and delay. Obviously, preferences between rewards that vary along one dimension are completely determined by the manipulated variable. However, it is unknown whether dopamine prediction error responses reflect the subjective value integrated from different reward dimensions. Here, we measured the preferences between rewards that varied along multiple dimensions, and as such could not be ranked according to objective metrics. Monkeys chose between rewards that differed in amount, risk, and type. Because their choices were complete and transitive, the monkeys chose “as if” they integrated different rewards and attributes into a common scale of value. The prediction error responses of single dopamine neurons reflected the integrated subjective value inferred from the choices, rather than the singular reward attributes. Specifically, amount, risk, and reward type modulated dopamine responses exactly to the extent that they influenced economic choices, even when rewards were vastly different, such as liquid and food. This prediction error response could provide a direct updating signal for economic values. PMID:24453218

  2. Computer calculated dose in paediatric prescribing.

    PubMed

    Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C

    2005-01-01

    Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children, and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on the medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, emergency department and at discharge from the inpatient service were analysed. A medication error was defined to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or an excessive total daily dose. The medication error rates and the factors influencing them were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate in the children's emergency department was 15.7%, for outpatients 21.5% and for discharge medication 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6%, compared with the traditional prescription error rate of 28.2%. Logistic regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were the seniority and paediatric training of the prescriber and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.
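
    A minimal sketch of a computer-calculated-dose check of the kind studied: compute the mg/kg dose and flag prescriptions outside an agreed range. The paracetamol bounds here are illustrative assumptions, not the study's agreed values.

    ```python
    LIMITS_MG_PER_KG = {"paracetamol": (10.0, 15.0)}   # assumed per-dose bounds

    def check_dose(drug, weight_kg, prescribed_mg):
        low, high = LIMITS_MG_PER_KG[drug]
        per_kg = prescribed_mg / weight_kg
        if per_kg < low:
            return f"underdose: {per_kg:.1f} mg/kg"
        if per_kg > high:
            return f"overdose: {per_kg:.1f} mg/kg"
        return f"ok: {per_kg:.1f} mg/kg"

    print(check_dose("paracetamol", weight_kg=18, prescribed_mg=180))  # ok: 10.0 mg/kg
    ```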

  3. Medication errors in the Middle East countries: a systematic review of the literature.

    PubMed

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20%) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1% to 90.5% for prescribing and from 9.4% to 80% for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15% to 34.8% of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality. Educational programmes on drug therapy for doctors and nurses are urgently needed.

  4. tPA Prescription and Administration Errors within a Regional Stroke System

    PubMed Central

    Chung, Lee S; Tkach, Aleksander; Lingenfelter, Erin M; Dehoney, Sarah; Rollo, Jeannie; de Havenon, Adam; DeWitt, Lucy Dana; Grantz, Matthew Ryan; Wang, Haimei; Wold, Jana J; Hannon, Peter M; Weathered, Natalie R; Majersik, Jennifer J

    2015-01-01

    Background IV tPA utilization in acute ischemic stroke (AIS) requires weight-based dosing and a standardized infusion rate. In our regional network, we have tried to minimize tPA dosing errors. We describe the frequency and types of tPA administration errors made in our comprehensive stroke center (CSC) and at community hospitals (CHs) prior to transfer. Methods Using our stroke quality database, we extracted clinical and pharmacy information on all patients who received IV tPA from 2010–11 at the CSC or CH prior to transfer. All records were analyzed for the presence of inclusion/exclusion criteria deviations or tPA errors in prescription, reconstitution, dispensing, or administration, and analyzed for association with outcomes. Results We identified 131 AIS cases treated with IV tPA: 51% female; mean age 68; 32% treated at CSC, 68% at CH (including 26% by telestroke) from 22 CHs. tPA prescription and administration errors were present in 64% of all patients (41% CSC, 75% CH, p<0.001), the most common being incorrect dosage for body weight (19% CSC, 55% CH, p<0.001). Of the 27 overdoses, there were 3 deaths due to systemic hemorrhage or ICH. Nonetheless, outcomes (parenchymal hematoma, mortality, mRS) did not differ between CSC and CH patients nor between those with and without errors. Conclusion Despite focus on minimization of tPA administration errors in AIS patients, such errors were very common in our regional stroke system. Although an association between tPA errors and stroke outcomes was not demonstrated, quality assurance mechanisms are still necessary to reduce potentially dangerous, avoidable errors. PMID:26698642
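
    The weight-based rule at issue is the standard alteplase protocol for acute ischemic stroke (0.9 mg/kg capped at 90 mg, 10% as a bolus, the remainder infused over an hour); a calculator like the sketch below can catch the "incorrect dosage for body weight" errors reported above.

    ```python
    def tpa_dose(weight_kg):
        """Alteplase for AIS: 0.9 mg/kg capped at 90 mg, 10% bolus, rest over 60 min."""
        total = min(0.9 * weight_kg, 90.0)
        bolus = 0.1 * total              # given over 1 minute
        infusion = total - bolus         # infused over 60 minutes
        return round(total, 1), round(bolus, 1), round(infusion, 1)

    print(tpa_dose(110))  # cap applies for heavy patients -> (90.0, 9.0, 81.0)
    ```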

  5. Common errors in textbook descriptions of muscle fiber size in nontrained humans.

    PubMed

    Chalmers, Gordon R; Row, Brandi S

    2011-09-01

    Exercise science and human anatomy and physiology textbooks commonly report that type IIB muscle fibers have the largest cross-sectional area of the three fiber types. These descriptions of muscle fiber sizes do not match with the research literature examining muscle fibers in young adult nontrained humans. For men, most commonly type IIA fibers were significantly larger than other fiber types (six out of 10 cases across six different muscles). For women, either type I, or both I and IIA muscle fibers were usually significantly the largest (five out of six cases across four different muscles). In none of these reports were type IIB fibers significantly larger than both other fiber types. In 27 studies that did not include statistical comparisons of mean fiber sizes across fiber types, in no cases were type IIB or fast glycolytic fibers larger than both type I and IIA, or slow oxidative and fast oxidative glycolytic fibers. The likely reason for mistakes in textbook descriptions of human muscle fiber sizes is that animal data were presented without being labeled as such, and without any warning that there are interspecies differences in muscle fiber properties. Correct knowledge of muscle fiber sizes may facilitate interpreting training and aging adaptations.

  6. Currie detection limits in gamma-ray spectroscopy.

    PubMed

    De Geer, Lars-Erik

    2004-01-01

    Currie hypothesis testing is applied to gamma-ray spectral data, where an optimum part of the peak is used and the background is considered well known from nearby channels. With this, the risk of making Type I errors is about 100 times lower than commonly assumed. A programme, PeakMaker, produces random peaks with given characteristics on the screen, and calculations are done to facilitate full use of Poisson statistics in spectrum analyses. In summary, the Currie decision limit concept applied to spectral data is reinterpreted, which gives better consistency between the selected error risk and the observed error rates; the PeakMaker program is described and the few-count problem is analyzed.
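
    A minimal sketch of Currie's decision threshold for a counting measurement, contrasting the well-known-background case with a paired-blank estimate; the count values are placeholders.

    ```python
    from math import sqrt
    from scipy.stats import norm

    def currie_decision_limit(background_counts, alpha=0.05, well_known=True):
        """Net counts above which a peak is 'detected' at the given alpha."""
        sigma0 = sqrt(background_counts if well_known else 2 * background_counts)
        return norm.ppf(1 - alpha) * sigma0

    print(currie_decision_limit(400, well_known=True))   # ~32.9 counts
    print(currie_decision_limit(400, well_known=False))  # ~46.5 counts
    ```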

  7. Reducing sampling error in faecal egg counts from black rhinoceros (Diceros bicornis).

    PubMed

    Stringer, Andrew P; Smith, Diane; Kerley, Graham I H; Linklater, Wayne L

    2014-04-01

    Faecal egg counts (FECs) are commonly used for the non-invasive assessment of parasite load within hosts. Sources of error, however, have been identified in laboratory techniques and sample storage. Here we focus on sampling error. We test whether a delay in sample collection can affect FECs, and estimate the number of samples needed to reliably assess mean parasite abundance within a host population. Two commonly found parasite eggs in black rhinoceros (Diceros bicornis) dung, strongyle-type nematodes and Anoplocephala gigantea, were used. We find that collection of dung from the centre of faecal boluses up to six hours after defecation does not affect FECs. More than nine samples were needed to greatly improve confidence intervals of the estimated mean parasite abundance within a host population. These results should improve the cost-effectiveness and efficiency of sampling regimes, and support the usefulness of FECs when used for the non-invasive assessment of parasite abundance in black rhinoceros populations.
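
    The "how many samples?" question can be explored by bootstrapping the confidence-interval width of the mean FEC as a function of the number of dung samples; the egg counts below are fabricated placeholders, not the study's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    fec = np.array([120, 0, 340, 80, 60, 510, 220, 40, 150, 90, 30, 400])  # eggs/g, fake

    def ci_width(counts, n, reps=5000):
        """Width of the bootstrap 95% CI of the mean when n samples are collected."""
        means = [rng.choice(counts, size=n, replace=True).mean() for _ in range(reps)]
        lo, hi = np.percentile(means, [2.5, 97.5])
        return hi - lo

    for n in (3, 6, 9, 12):
        print(n, round(ci_width(fec, n)))   # width shrinks as n grows
    ```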

  8. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    PubMed

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, the study gathered feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire, then summarized the results and returned them to the panel for feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate the responses. The list identified 24 items as the most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top-10 errors list based on means, with heavy workload and fatigue at the top of the list. The Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur.

  9. Measuring Seebeck Coefficient

    NASA Technical Reports Server (NTRS)

    Snyder, G. Jeffrey (Inventor)

    2015-01-01

    A high temperature Seebeck coefficient measurement apparatus and method with various features to minimize typical sources of error are described. Common sources of temperature and voltage measurement error that may impact accuracy are identified and reduced. Applying the identified principles, a high temperature Seebeck measurement apparatus and method employing a uniaxial, four-point geometry is described that operates from room temperature up to 1300 K. These techniques for non-destructive Seebeck coefficient measurement are simple to operate and are suitable for bulk samples with a broad range of physical types and shapes.
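
    One common way to reject constant voltage offsets (a hedged illustration, not necessarily this apparatus's exact algorithm) is to estimate the Seebeck coefficient as the least-squares slope of thermoelectric voltage against temperature difference rather than as a single-point ratio; the data below are placeholders.

    ```python
    import numpy as np

    dT = np.array([0.5, 1.0, 1.5, 2.0, 2.5])               # K
    dV = np.array([11.0, 21.5, 32.4, 42.8, 53.5]) * 1e-6   # V, includes a small offset
    slope, offset = np.polyfit(dT, dV, 1)                  # least-squares line
    print(f"S = {-slope * 1e6:.1f} uV/K (offset {offset * 1e9:.1f} nV rejected)")
    ```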

  10. Medicine and aviation: a review of the comparison.

    PubMed

    Randell, R

    2003-01-01

    This paper aims to understand the nature of medical error in highly technological environments and argues that a comparison with aviation can blur its real understanding. It compares the notion of error in health care and in aviation, based on the author's own ethnographic study in intensive care units and on findings from the research literature on errors in aviation. Failures in the use of medical technology are common. In attempts to understand medical error, much attention has focused on how we can learn from aviation. This paper argues that such a comparison is not always useful, on the basis that (i) the type of work and technology is very different in the two domains; (ii) different issues are involved in training and procurement; and (iii) attitudes to error vary between the domains. Therefore, it is necessary to look closely at the subject of medical error and resolve those questions left unanswered by the lessons of aviation.

  11. Critical older driver errors in a national sample of serious U.S. crashes.

    PubMed

    Cicchino, Jessica B; McCartt, Anne T

    2015-07-01

    Older drivers are at increased risk of crash involvement per mile traveled. The purpose of this study was to examine older driver errors in serious crashes to determine which errors are most prevalent. The National Highway Traffic Safety Administration's National Motor Vehicle Crash Causation Survey collected in-depth, on-scene data for a nationally representative sample of 5470 U.S. police-reported passenger vehicle crashes during 2005-2007 for which emergency medical services were dispatched. There were 620 crashes involving 647 drivers aged 70 and older, representing 250,504 crash-involved older drivers. The proportions of various critical errors made by drivers aged 70 and older were compared with those made by drivers aged 35-54. Driver error was the critical reason for 97% of crashes involving older drivers. Among older drivers who made critical errors, the most common were inadequate surveillance (33%) and misjudgment of the length of a gap between vehicles or of another vehicle's speed, illegal maneuvers, medical events, and daydreaming (6% each). Inadequate surveillance (33% vs. 22%) and gap or speed misjudgment errors (6% vs. 3%) were more prevalent among older drivers than middle-aged drivers. Seventy-one percent of older drivers' inadequate surveillance errors were due to looking and not seeing another vehicle or failing to see a traffic control, rather than failing to look, compared with 40% of inadequate surveillance errors among middle-aged drivers. About two-thirds (66%) of older drivers' inadequate surveillance errors and 77% of their gap or speed misjudgment errors were made when turning left at intersections. When older drivers traveled off the edge of the road or traveled over the lane line, this was most commonly due to non-performance errors such as medical events (51% and 44%, respectively), whereas middle-aged drivers were involved in these crash types for other reasons. Gap or speed misjudgment errors and inadequate surveillance errors were significantly more prevalent among female older drivers than among female middle-aged drivers, but the prevalence of these errors did not differ significantly between older and middle-aged male drivers. These errors comprised 51% of errors among older female drivers but only 31% among older male drivers. Efforts to reduce older driver crash involvements should focus on diminishing the likelihood of the most common driver errors. Countermeasures that simplify or remove the need to make left turns across traffic, such as roundabouts, protected left turn signals, and diverging diamond intersection designs, could decrease the frequency of inadequate surveillance and gap or speed misjudgment errors. In the future, vehicle-to-vehicle and vehicle-to-infrastructure communications may also help protect older drivers from these errors.

  12. Verifying Safeguards Declarations with INDEPTH: A Sensitivity Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grogan, Brandon R; Richards, Scott

    2017-01-01

    A series of ORIGEN calculations were used to simulate the irradiation and decay of a number of spent fuel assemblies. These simulations focused on variations in the irradiation history that achieved the same terminal burnup through a different set of cycle histories. Simulated NDA measurements were generated for each test case from the ORIGEN data. These simulated measurement types included relative gammas, absolute gammas, absolute gammas plus neutrons, and concentrations of a set of six isotopes commonly measured by NDA. The INDEPTH code was used to reconstruct the initial enrichment, cooling time, and burnup for each irradiation using each simulated measurement type. The results were then compared to the initial ORIGEN inputs to quantify the size of the errors induced by the variations in cycle histories. Errors were compared based on the underlying changes to the cycle history, as well as the data types used for the reconstructions.

  13. Articulation in schoolchildren and adults with neurofibromatosis type 1.

    PubMed

    Cosyns, Marjan; Mortier, Geert; Janssens, Sandra; Bogaert, Famke; D'Hondt, Stephanie; Van Borsel, John

    2012-01-01

    Several authors have mentioned the occurrence of articulation problems in the neurofibromatosis type 1 (NF1) population. However, few studies have undertaken a detailed analysis of the articulation skills of NF1 patients, especially in schoolchildren and adults. Therefore, the aim of the present study was to examine in depth the articulation skills of NF1 schoolchildren and adults, both phonetically and phonologically. Speech samples were collected from 43 Flemish NF1 patients (14 children and 29 adults), ranging in age between 7 and 53 years, using a standardized speech test in which all Flemish single speech sounds and most clusters occur in all their permissible syllable positions. Analyses concentrated on consonants only and included a phonetic inventory, a phonetic analysis, and a phonological analysis. Phonetic inventories were incomplete in 16.28% (7/43) of participants, in whom correct realizations of the sibilants /ʃ/ and/or /ʒ/ were missing. Phonetic analysis revealed that distortions were the predominant phonetic error type. Sigmatismus stridens, multiple ad- or interdentality, and, in children, rhotacismus non vibrans were frequently observed. From a phonological perspective, the most common error types were substitution and syllable structure errors. In particular, devoicing, cluster simplification, and, in children, deletion of the final consonant of words were perceived. Further, significantly more men than women presented with an incomplete phonetic inventory, and girls tended to display more articulation errors than boys. Additionally, children exhibited significantly more articulation errors than adults, suggesting that although the articulation skills of NF1 patients evolve positively with age, articulation problems do not resolve completely from childhood to adulthood. As such, the articulation errors made by NF1 adults may be regarded as residual articulation disorders. It can be concluded that the speech of NF1 patients is characterized by mild articulation disorders at an age where this is no longer expected. Readers will be able to describe neurofibromatosis type 1 (NF1) and explain the articulation errors displayed by schoolchildren and adults with this genetic syndrome.

  14. Correcting evaluation bias of relational classifiers with network cross validation

    DOE PAGES

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that, combined with paired t-tests, produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1 - Type II error).
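
    The inflation mechanism can be illustrated with a toy simulation (an illustration, not the paper's experiment): when fold-level performance differences are positively correlated, as they are when evaluation folds share instances, a nominal 5% paired t-test rejects a true null far more often than 5% of the time.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    folds, rho, reps, alpha = 10, 0.6, 5000, 0.05
    cov = np.full((folds, folds), rho) + (1 - rho) * np.eye(folds)
    L = np.linalg.cholesky(cov)

    rejections = 0
    for _ in range(reps):
        diffs = L @ rng.standard_normal(folds)    # true mean difference is zero
        rejections += stats.ttest_1samp(diffs, 0.0).pvalue < alpha
    print(f"empirical Type I error: {rejections / reps:.3f} (nominal {alpha})")
    ```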

  15. On the Hedges Correction for a "t"-Test

    ERIC Educational Resources Information Center

    VanHoudnos, Nathan M.; Greenhouse, Joel B.

    2016-01-01

    When cluster randomized experiments are analyzed as if units were independent, test statistics for treatment effects can be anticonservative. Hedges proposed a correction for such tests by scaling them to control their Type I error rate. This article generalizes the Hedges correction from a posttest-only experimental design to more common designs…
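
    A simplified form of the idea (not Hedges' exact adjustment, which also corrects the degrees of freedom) deflates the naive t statistic by the cluster design effect; cluster size and intraclass correlation below are illustrative.

    ```python
    from math import sqrt

    def corrected_t(t_naive, cluster_size, icc):
        """Deflate a t statistic computed as if units were independent."""
        design_effect = 1 + (cluster_size - 1) * icc
        return t_naive / sqrt(design_effect)

    print(f"{corrected_t(2.4, cluster_size=25, icc=0.10):.2f}")  # 2.4 -> ~1.30
    ```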

  16. A Substantive Process Analysis of Responses to Items from the Multistate Bar Examination

    ERIC Educational Resources Information Center

    Bonner, Sarah M.; D'Agostino, Jerome V.

    2012-01-01

    We investigated examinees' cognitive processes while they solved selected items from the Multistate Bar Exam (MBE), a high-stakes professional certification examination. We focused on ascertaining those mental processes most frequently used by examinees, and the most common types of errors in their thinking. We compared the relationships between…

  17. [The measurement of data quality in censuses of population and housing].

    PubMed

    1980-01-01

    The determination of data quality in population and housing censuses is discussed. Principal types of errors commonly found in census data are reviewed, and the parameters used to evaluate data quality are described. Various methods for measuring data quality are outlined and possible applications of the methods are illustrated using Canadian examples.

  18. Fisher's method of combining dependent statistics using generalizations of the gamma distribution with applications to genetic pleiotropic associations.

    PubMed

    Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang

    2014-04-01

    A classical approach to combining independent test statistics is Fisher's combination of $p$-values, which follows the $\chi^2$ distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of Type I error rates than the GD, which tends to have inflated Type I error rates at more extreme tails. In practice, common model selection criteria (e.g. Akaike information criterion/Bayesian information criterion) can be used to help select the better distribution to use for the FCT. A simple strategy for applying the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
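
    A minimal sketch of the GD-based FCT that the paper generalizes: Fisher's statistic is chi-square with 2k degrees of freedom when the p-values are independent, and under dependence a gamma is fitted by moment matching. The variance inflation factor below is an assumed input; in practice it would be estimated from the correlation of the test statistics.

    ```python
    import numpy as np
    from scipy import stats

    pvals = np.array([0.03, 0.20, 0.004, 0.09])
    X = -2 * np.sum(np.log(pvals))          # Fisher's statistic
    k = len(pvals)

    p_indep = stats.chi2.sf(X, df=2 * k)    # valid only for independent p-values

    # dependence: gamma fitted by moments; chi2(2k) has mean 2k and variance 4k,
    # and the 1.5x variance inflation is an illustrative assumption
    mean, var = 2.0 * k, 1.5 * 4.0 * k
    shape, scale = mean**2 / var, var / mean
    p_dep = stats.gamma.sf(X, a=shape, scale=scale)
    print(f"{p_indep:.4g} (independence) vs {p_dep:.4g} (dependence-adjusted)")
    ```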

  19. Transmission and storage of medical images with patient information.

    PubMed

    Acharya U, Rajendra; Subbanna Bhat, P; Kumar, Sathish; Min, Lim Choo

    2003-07-01

    Digital watermarking is a technique for hiding specific identification data in a signal for copyright authentication. This technique is adapted here to interleave patient information with medical images, to reduce storage and transmission overheads. The text data are encrypted before interleaving with images to ensure greater security, and graphical signals are interleaved with the image. Two types of error-control coding techniques are proposed to enhance the reliability of transmission and storage of medical images interleaved with patient information. Transmission and storage scenarios are simulated with and without error-control coding, and a qualitative as well as quantitative interpretation is provided of the reliability enhancement resulting from the use of commonly used error-control codes such as repetition codes and the (7,4) Hamming code.
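
    A minimal sketch of the (7,4) Hamming code mentioned above, which encodes 4 data bits with 3 parity bits and corrects any single-bit error on decode; the generator and parity-check matrices use the standard systematic construction.

    ```python
    import numpy as np

    G = np.array([[1,0,0,0,1,1,0], [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1], [0,0,0,1,1,1,1]])                    # generator
    H = np.array([[1,1,0,1,1,0,0], [1,0,1,1,0,1,0], [0,1,1,1,0,0,1]])   # parity check

    def encode(bits4):
        return (np.array(bits4) @ G) % 2

    def decode(word7):
        word = np.array(word7).copy()
        syndrome = (H @ word) % 2
        if syndrome.any():    # syndrome equals the column of H at the flipped bit
            bad = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
            word[bad] ^= 1
        return word[:4]

    codeword = encode([1, 0, 1, 1])
    codeword[5] ^= 1                 # inject a single-bit channel error
    print(decode(codeword))          # -> [1 0 1 1]
    ```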

  20. Investigating the Causes of Medication Errors and Strategies to Prevention of Them from Nurses and Nursing Student Viewpoint

    PubMed Central

    Gorgich, Enam Alhagh Charkhat; Barfroshan, Sanam; Ghoreishi, Gholamreza; Yaghoobi, Maryam

    2016-01-01

    Introduction and Aim: Medication errors are a serious problem worldwide and among the most common medical errors; they threaten patient safety and may even lead to death. The purpose of this study was to investigate the causes of medication errors and strategies for their prevention from the viewpoint of nurses and nursing students. Materials & Methods: This cross-sectional descriptive study was conducted on 327 nursing staff of Khatam-al-Anbia hospital and 62 intern nursing students in the nursing and midwifery school of Zahedan, Iran, enrolled through availability sampling in 2015. The data were collected by a valid and reliable questionnaire. To analyze the data, descriptive statistics, the t-test, and ANOVA were applied using SPSS16 software. Findings: The results showed that the most common cause of medication errors among nurses was tiredness due to increased workload (97.8%), and among nursing students was drug calculation (77.4%). In the opinion of nurses and nursing students, the most important preventive measure was reducing work pressure by increasing personnel in proportion to the number and condition of patients, as well as creating a dedicated medication-calculation unit. There was also a significant relationship between the type of ward and the mean number of medication errors in the two groups. Conclusion: Based on the results, it is recommended that nurse managers resolve the human resources problem and provide workshops and in-service education about preparing medications, side effects of drugs, and pharmacological knowledge. Using electronic medication cards is a measure that reduces medication errors. PMID:27045413

  1. Cognitive error as the most frequent contributory factor in cases of medical injury: a study on verdict's judgment among closed claims in Japan.

    PubMed

    Tokuda, Yasuharu; Kishida, Naoki; Konishi, Ryota; Koizumi, Shunzo

    2011-03-01

    Cognitive errors in the course of clinical decision-making are prevalent in many cases of medical injury. We used information on verdicts' judgments from closed claims files to determine the important cognitive factors associated with cases of medical injury. Data were collected from claims closed between 2001 and 2005 at district courts in Tokyo and Osaka, Japan. In each case, we recorded all the contributory cognitive, systemic, and patient-related factors judged in the verdicts to be causally related to the medical injury. We also analyzed the association between cognitive factors and cases involving paid compensation using a multivariable logistic regression model. Among 274 cases (mean age 49 years; 45% women), there were 122 (45%) deaths and 67 (24%) major injuries (incomplete recovery within a year). In 103 cases (38%), the verdicts ordered hospitals to pay compensation (median 8,000,000 Japanese yen). An error in judgment (199/274, 73%) and failure of vigilance (177/274, 65%) were the most prevalent causative cognitive factors, and error in judgment was also significantly associated with paid compensation (odds ratio, 1.9; 95% confidence interval [CI], 1.0-3.4). Systemic causative factors, including poor teamwork (11/274, 4%) and technology failure (5/274, 2%), were less common. This closed claims analysis based on verdicts' judgments showed that cognitive errors were common in cases of medical injury, with an error in judgment being most prevalent and closely associated with compensation payment. Reduction of this type of error is required to produce safer healthcare.

  2. Obstetric Neuraxial Drug Administration Errors: A Quantitative and Qualitative Analytical Review.

    PubMed

    Patel, Santosh; Loveridge, Robert

    2015-12-01

    Drug administration errors in obstetric neuraxial anesthesia can have devastating consequences. Although fully recognizing that they represent "only the tip of the iceberg," published case reports/series of these errors were reviewed in detail with the aim of estimating the frequency and the nature of these errors. We identified case reports and case series from MEDLINE and performed a quantitative analysis of the involved drugs, error setting, source of error, the observed complications, and any therapeutic interventions. We subsequently performed a qualitative analysis of the human factors involved and proposed modifications to practice. Twenty-nine cases were identified. Various drugs were given in error, but no direct effects on the course of labor, mode of delivery, or neonatal outcome were reported. Four maternal deaths from the accidental intrathecal administration of tranexamic acid were reported, all occurring after delivery of the fetus. A range of hemodynamic and neurologic signs and symptoms were noted, but the most commonly reported complication was the failure of the intended neuraxial anesthetic technique. Several human factors were present; most common factors were drug storage issues and similar drug appearance. Four practice recommendations were identified as being likely to have prevented the errors. The reported errors exposed latent conditions within health care systems. We suggest that the implementation of the following processes may decrease the risk of these types of drug errors: (1) Careful reading of the label on any drug ampule or syringe before the drug is drawn up or injected; (2) labeling all syringes; (3) checking labels with a second person or a device (such as a barcode reader linked to a computer) before the drug is drawn up or administered; and (4) use of non-Luer lock connectors on all epidural/spinal/combined spinal-epidural devices. Further study is required to determine whether routine use of these processes will reduce drug error.

  3. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets.

    PubMed

    Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'.
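
    Inter-operator digitization error of this kind can be quantified by Procrustes-superimposing two operators' landmark configurations for the same head and examining the residual disparity; the sketch below uses scipy's procrustes on fabricated coordinates, not the study's MRI data.

    ```python
    import numpy as np
    from scipy.spatial import procrustes

    rng = np.random.default_rng(3)
    operator_a = rng.random((10, 3))                          # 10 landmarks in 3D
    operator_b = operator_a + rng.normal(0, 0.01, (10, 3))    # small digitizing error

    _, _, disparity = procrustes(operator_a, operator_b)      # aligns, scales, compares
    print(f"Procrustes disparity between operators: {disparity:.5f}")
    ```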

  4. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets

    PubMed Central

    Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those that are becoming increasingly common in the 'era of big data'. PMID:29787586

  5. Evaluating methods of correcting for multiple comparisons implemented in SPM12 in social neuroscience fMRI studies: an example from moral psychology.

    PubMed

    Han, Hyemin; Glenn, Andrea L

    2018-06-01

    In fMRI research, the goal of correcting for multiple comparisons is to identify areas of activity that reflect true effects, and thus would be expected to replicate in future studies. Finding an appropriate balance between trying to minimize false positives (Type I error) while not being too stringent and omitting true effects (Type II error) can be challenging. Furthermore, the advantages and disadvantages of these types of errors may differ for different areas of study. In many areas of social neuroscience that involve complex processes and considerable individual differences, such as the study of moral judgment, effects are typically smaller and statistical power weaker, leading to the suggestion that less stringent corrections, which allow for more sensitivity, may be beneficial, though at the cost of more false positives. Using moral judgment fMRI data, we evaluated four commonly used methods for multiple comparison correction implemented in Statistical Parametric Mapping 12 by examining which method produced the most precise overlap with results from a meta-analysis of relevant studies and with results from nonparametric permutation analyses. We found that voxelwise thresholding with familywise error correction based on Random Field Theory provides a more precise overlap (i.e., without omitting too many regions or encompassing too many additional regions) than clusterwise thresholding, Bonferroni correction, or false discovery rate correction methods.
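
    As a rough illustration of the two corrections above that do not depend on image geometry, the sketch below applies Bonferroni familywise correction and Benjamini-Hochberg false discovery rate control to a vector of simulated voxelwise p-values; RFT-based voxelwise and clusterwise thresholds are omitted because they require the image's smoothness estimates.

        import numpy as np

        def bonferroni(p, alpha=0.05):
            # Familywise control: test each voxel at alpha / (number of tests).
            return p < alpha / p.size

        def fdr_bh(p, q=0.05):
            # Benjamini-Hochberg: largest k with p_(k) <= q * k / m sets the cutoff.
            order = np.argsort(p)
            thresh = q * np.arange(1, p.size + 1) / p.size
            passed = p[order] <= thresh
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            mask = np.zeros(p.size, dtype=bool)
            mask[order[:k]] = True
            return mask

        rng = np.random.default_rng(1)
        p = np.concatenate([rng.uniform(0, 0.001, 50),   # 50 "true" effects
                            rng.uniform(0, 1, 9950)])    # 9950 null voxels
        print(bonferroni(p).sum(), "voxels survive Bonferroni")
        print(fdr_bh(p).sum(), "voxels survive BH-FDR")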

  6. Personal digital assistant-based drug information sources: potential to improve medication safety.

    PubMed

    Galt, Kimberly A; Rule, Ann M; Houghton, Bruce; Young, Daniel O; Remington, Gina

    2005-04-01

    This study compared the potential for personal digital assistant (PDA)-based drug information sources to minimize potential medication errors dependent on accurate and complete drug information at the point of care. A quality and safety framework for drug information resources was developed to evaluate 11 PDA-based drug information sources. Three drug information sources met the criteria of the framework: Epocrates Rx Pro, Lexi-Drugs, and mobileMICROMEDEX. Medication error types related to drug information at the point of care were then determined. Forty-seven questions were developed to test the potential of the sources to prevent these error types. Pharmacists and physician experts from Creighton University created these questions based on the most common types of questions asked by primary care providers. Three physicians evaluated the drug information sources, rating each source for each question: 1 = no information available, 2 = some information available, or 3 = adequate amount of information available. The mean ratings for the drug information sources were 2.0 (Epocrates Rx Pro), 2.5 (Lexi-Drugs), and 2.03 (mobileMICROMEDEX). Lexi-Drugs was rated significantly better than mobileMICROMEDEX (t test, P = 0.05) and Epocrates Rx Pro (t test, P = 0.01). Lexi-Drugs was found to be the most specific and complete PDA resource available to optimize medication safety by reducing potential errors associated with drug information. No resource was sufficient to address the patient safety information needs for all cases.

  7. Gridded national inventory of U.S. methane emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  8. Gridded National Inventory of U.S. Methane Emissions

    NASA Technical Reports Server (NTRS)

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel

    2016-01-01

    We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  9. Gridded national inventory of U.S. methane emissions

    DOE PAGES

    Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...

    2016-11-16

    Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.

  10. Gridded National Inventory of U.S. Methane Emissions.

    PubMed

    Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L

    2016-12-06

    We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
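
    The disaggregation step common to these records can be illustrated in a few lines: a national total for one source type is spread over a grid in proportion to a spatial proxy, so that the gridded field sums back to the reported total. In this minimal sketch the proxy array, grid size, and total are invented placeholders, not the inventory's actual databases.

        import numpy as np

        rng = np.random.default_rng(2)
        national_total = 7.1                  # hypothetical Tg CH4/yr, one source type
        proxy = rng.random((180, 360))        # stand-in activity data on a grid
        weights = proxy / proxy.sum()         # fractional allocation per cell
        emis_grid = national_total * weights  # gridded emissions, same units

        # The allocation conserves the national total by construction.
        assert np.isclose(emis_grid.sum(), national_total)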

  11. Epidemiology of dental professional liability.

    PubMed

    Montagna, F; Cortesini, C; Manca, R; Montagna, L; Piras, A; Manfredini, D

    2011-04-01

    The aim of this article is to collect data relating to dental professional liability in Italy and to provide a common platform for discussion among clinicians, legal medicine practitioners, and experts in law. On the basis of two different dental-legal statistical samples (1,670 reports of legal dental experts and 320 civil court decisions), we analyzed dental professional liability lawsuits in terms of the distribution of lawsuits among the different dental specialties, the recurrence and type of errors, the outcome of civil suits, and the parameters of compensation. Some ideas are also proposed for possible strategies in the management of clinical risk (prevention of errors) and court proceedings.

  12. On the interaction of deaffrication and consonant harmony

    PubMed Central

    Dinnsen, Daniel A.; Gierut, Judith A.; Morrisette, Michele L.; Green, Christopher R.; Farris-Trimble, Ashley W.

    2010-01-01

    Error patterns in children’s phonological development are often described as simplifying processes that can interact with one another with different consequences. Some interactions limit the applicability of an error pattern, and others extend it to more words. Theories predict that error patterns interact to their full potential. While specific interactions have been documented for certain pairs of processes, no developmental study has shown that the range of typologically predicted interactions occurs for those processes. To determine whether this anomaly is an accidental gap or a systematic peculiarity of particular error patterns, two commonly occurring processes were considered, namely Deaffrication and Consonant Harmony. Results are reported from a cross-sectional and longitudinal study of 12 children (age 3;0 – 5;0) with functional phonological delays. Three interaction types were attested to varying degrees. The longitudinal results further instantiated the typology and revealed a characteristic trajectory of change. Implications of these findings are explored. PMID:20513256

  13. Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2013-10-02

    Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.

  14. Medication errors reported to the National Medication Error Reporting System in Malaysia: a 4-year retrospective review (2009 to 2012).

    PubMed

    Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi; Wan-Mohaina, W M

    2016-12-01

    Reporting and analysing data on medication errors (MEs) is important and contributes to a better understanding of the error-prone environment. This study aims to examine the characteristics of errors submitted to the National Medication Error Reporting System (MERS) in Malaysia. A retrospective review of reports received from 1 January 2009 to 31 December 2012 was undertaken. Descriptive statistics were applied. A total of 17,357 reported MEs were reviewed. The majority of errors were from public-funded hospitals. Near misses accounted for 86.3% of the errors. The majority of errors (98.1%) had no harmful effects on the patients. Prescribing contributed more than three-quarters of the overall errors (76.1%). Pharmacists detected and reported the majority of errors (92.1%). Erroneous dosage or strength of medicine (30.75%) was the leading type of error, whilst cardiovascular drugs (25.4%) were the most common category of drug involved. MERS provides rich information on the characteristics of reported MEs. The low contribution to reporting from healthcare facilities other than government hospitals, and from non-pharmacists, requires further investigation. Thus, a feasible approach to promote MERS among healthcare providers in both the public and private sectors needs to be formulated and strengthened. Preventive measures to minimise MEs should be directed at improving prescribing competency among the fallible prescribers identified.

  15. Assessment of the knowledge and attitudes of intern doctors to medication prescribing errors in a Nigeria tertiary hospital

    PubMed Central

    Ajemigbitse, Adetutu A.; Omole, Moses Kayode; Ezike, Nnamdi Chika; Erhun, Wilson O.

    2013-01-01

    Context: Junior doctors are reported to make most of the prescribing errors in the hospital setting. Aims: The aim of the following study is to determine the knowledge intern doctors have about prescribing errors and circumstances contributing to making them. Settings and Design: A structured questionnaire was distributed to intern doctors in National Hospital Abuja Nigeria. Subjects and Methods: Respondents gave information about their experience with prescribing medicines, the extent to which they agreed with the definition of a clinically meaningful prescribing error and events that constituted such. Their experience with prescribing certain categories of medicines was also sought. Statistical Analysis Used: Data was analyzed with Statistical Package for the Social Sciences (SPSS) software version 17 (SPSS Inc Chicago, Ill, USA). Chi-squared analysis contrasted differences in proportions; P < 0.05 was considered to be statistically significant. Results: The response rate was 90.9% and 27 (90%) had <1 year of prescribing experience. 17 (56.7%) respondents totally agreed with the definition of a clinically meaningful prescribing error. Most common reasons for prescribing mistakes were a failure to check prescriptions with a reference source (14, 25.5%) and failure to check for adverse drug interactions (14, 25.5%). Omitting some essential information such as duration of therapy (13, 20%), patient age (14, 21.5%) and dosage errors (14, 21.5%) were the most common types of prescribing errors made. Respondents considered workload (23, 76.7%), multitasking (19, 63.3%), rushing (18, 60.0%) and tiredness/stress (16, 53.3%) as important factors contributing to prescribing errors. Interns were least confident prescribing antibiotics (12, 25.5%), opioid analgesics (12, 25.5%) cytotoxics (10, 21.3%) and antipsychotics (9, 19.1%) unsupervised. Conclusions: Respondents seemed to have a low awareness of making prescribing errors. Principles of rational prescribing and events that constitute prescribing errors should be taught in the practice setting. PMID:24808682

  16. Assessment of the knowledge and attitudes of intern doctors to medication prescribing errors in a Nigeria tertiary hospital.

    PubMed

    Ajemigbitse, Adetutu A; Omole, Moses Kayode; Ezike, Nnamdi Chika; Erhun, Wilson O

    2013-12-01

    Junior doctors are reported to make most of the prescribing errors in the hospital setting. The aim of the following study is to determine the knowledge intern doctors have about prescribing errors and circumstances contributing to making them. A structured questionnaire was distributed to intern doctors in National Hospital Abuja Nigeria. Respondents gave information about their experience with prescribing medicines, the extent to which they agreed with the definition of a clinically meaningful prescribing error and events that constituted such. Their experience with prescribing certain categories of medicines was also sought. Data was analyzed with Statistical Package for the Social Sciences (SPSS) software version 17 (SPSS Inc Chicago, Ill, USA). Chi-squared analysis contrasted differences in proportions; P < 0.05 was considered to be statistically significant. The response rate was 90.9% and 27 (90%) had <1 year of prescribing experience. 17 (56.7%) respondents totally agreed with the definition of a clinically meaningful prescribing error. Most common reasons for prescribing mistakes were a failure to check prescriptions with a reference source (14, 25.5%) and failure to check for adverse drug interactions (14, 25.5%). Omitting some essential information such as duration of therapy (13, 20%), patient age (14, 21.5%) and dosage errors (14, 21.5%) were the most common types of prescribing errors made. Respondents considered workload (23, 76.7%), multitasking (19, 63.3%), rushing (18, 60.0%) and tiredness/stress (16, 53.3%) as important factors contributing to prescribing errors. Interns were least confident prescribing antibiotics (12, 25.5%), opioid analgesics (12, 25.5%) cytotoxics (10, 21.3%) and antipsychotics (9, 19.1%) unsupervised. Respondents seemed to have a low awareness of making prescribing errors. Principles of rational prescribing and events that constitute prescribing errors should be taught in the practice setting.

  17. A Comparison of the Rasch Separate Calibration and Between-Fit Methods of Detecting Item Bias.

    ERIC Educational Resources Information Center

    Smith, Richard M.

    1996-01-01

    The separate calibration t-test approach of B. Wright and M. Stone (1979) and the common calibration between-fit approach of B. Wright, R. Mead, and R. Draba (1976) appeared to have similar Type I error rates and similar power to detect item bias within a Rasch framework. (SLD)
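
    The separate-calibration approach can be stated compactly: item difficulties estimated independently in two groups (after equating) are compared item by item against their combined standard error. A minimal sketch with hypothetical numbers:

        import math

        def separate_calibration_t(d_ref, se_ref, d_focal, se_focal):
            # Difference of independently estimated item difficulties, scaled by
            # the combined standard error of the two estimates.
            return (d_focal - d_ref) / math.sqrt(se_ref ** 2 + se_focal ** 2)

        t = separate_calibration_t(d_ref=0.40, se_ref=0.12,
                                   d_focal=0.85, se_focal=0.15)
        print(f"t = {t:.2f}")   # |t| above ~2 would flag the item for bias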

  18. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Spatiotemporal Filtering Using Principal Component Analysis and Karhunen-Loeve Expansion Approaches for Regional GPS Network Analysis

    NASA Technical Reports Server (NTRS)

    Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.

    2006-01-01

    Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks of hundreds of kilometers extent, but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. The principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses. Therefore they provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for all east, north, and vertical components, which implies a very long wavelength source for the common mode errors, compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
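
    A minimal sketch of the PCA stage on synthetic data: daily residual series are stacked as a days-by-stations matrix, the leading principal component is taken as the common mode, and its rank-one reconstruction is subtracted. The matrix sizes and noise levels below are assumptions, not SCIGN values.

        import numpy as np

        rng = np.random.default_rng(3)
        days, stations = 365, 40
        common = rng.normal(size=(days, 1))                 # shared daily error
        X = common @ rng.uniform(0.5, 1.5, (1, stations)) \
            + rng.normal(scale=0.3, size=(days, stations))  # plus local noise

        Xc = X - X.mean(axis=0)                   # remove per-station means
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        cme = np.outer(U[:, 0] * s[0], Vt[0])     # rank-1 common-mode estimate
        filtered = Xc - cme                       # spatially filtered series
        print(f"variance removed: {100 * (1 - filtered.var() / Xc.var()):.1f}%")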

  20. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects due to sample sizes smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical requirements incorporated in the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 with the t-test method at p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is provided by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
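
    The sketch below reproduces the general shape of such a simulation: repeatedly draw two samples, apply the two-sample t-test, and count false positives under the null hypothesis (Type I) and misses under a fixed effect (Type II). The normal distributions, the effect size of one standard deviation, and the trial count are illustrative assumptions, not the paper's exact design.

        import numpy as np
        from scipy import stats

        def error_rates(n, effect, trials=5000, alpha=0.05, seed=4):
            rng = np.random.default_rng(seed)
            type1 = type2 = 0
            for _ in range(trials):
                a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
                if stats.ttest_ind(a, b).pvalue < alpha:
                    type1 += 1                    # false positive under H0
                a, b = rng.normal(0, 1, n), rng.normal(effect, 1, n)
                if stats.ttest_ind(a, b).pvalue >= alpha:
                    type2 += 1                    # missed true effect
            return type1 / trials, type2 / trials

        for n in (3, 6, 9):
            t1, t2 = error_rates(n, effect=1.0)
            print(f"n={n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")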

  1. Quality of death notification forms in North West Bank/Palestine: a descriptive study.

    PubMed

    Qaddumi, Jamal A S; Nazzal, Zaher; Yacoup, Allam R S; Mansour, Mahmoud

    2017-04-11

    Death notification forms (DNFs) are important documents; failure by physicians to complete them properly affects the national mortality report and, consequently, evidence-based decision making. Errors in completing DNFs are common all over the world and differ in type and cause. We aimed to evaluate the quality of DNFs in terms of completeness and the types of errors in the cause-of-death section. A descriptive study was conducted to review 2707 DNFs in the North West Bank/Palestine during the year 2012 using data abstraction sheets. SPSS 17.0 was used to show the frequency of major and minor errors committed in filling out the DNFs. Surprisingly, only 1% of the examined DNFs had their cause-of-death section filled out completely correctly. The immediate cause of death was correctly identified in 5.9% of all DNFs and the underlying cause of death was correctly reported in 55.4% of them. The sequence was incorrect in 41.5% of the DNFs. The most frequently documented minor error was the "Not writing time intervals" error (97.0%). Almost all DNFs contained at least one minor or major error. This high percentage of errors may affect mortality and morbidity statistics, public health research and the process of providing evidence for health policy. Training workshops on DNF completion for newly recruited employees and at the beginning of the residency program are recommended on a regular basis. We also recommend reviewing the national DNFs to simplify them and make them consistent with updated evidence-based guidelines and recommendations.

  2. Paediatric Refractive Errors in an Eye Clinic in Osogbo, Nigeria.

    PubMed

    Michaeline, Isawumi; Sheriff, Agboola; Bimbo, Ayegoro

    2016-03-01

    Paediatric ophthalmology is an emerging subspecialty in Nigeria and as such there is a paucity of data on refractive errors in the country. This study set out to determine the pattern of refractive errors in children attending an eye clinic in South West Nigeria. A descriptive study of 180 consecutive subjects seen over a 2-year period. Presenting complaints, presenting visual acuity (PVA), age and sex were recorded. Clinical examination of the anterior and posterior segments of the eyes, extraocular muscle assessment and refraction were done. The types of refractive errors and their grades were determined. Corrected VA was obtained. Data was analysed using descriptive statistics in proportions, chi-square, with p value <0.05. The age range of subjects was 3 to 16 years (mean age = 11.7, SD = 0.51), with males making up 33.9%. The commonest presenting complaint was blurring of distant vision (40%), and the commonest presenting visual acuity was 6/9 (33.9%); normal vision constituted >75.0%, visual impairment 20%, and low vision 23.3%. Low grade spherical and cylindrical errors occurred most frequently (35.6% and 59.9% respectively). Regular astigmatism was significantly more common, P <0.001. The commonest diagnosis was simple myopic astigmatism (41.1%). Four cases of strabismus were seen. Simple spherical and cylindrical errors were the commonest types of refractive errors seen. Visual impairment and low vision occurred and could be a cause of absenteeism from school. Low-cost spectacle production or dispensing units and health education are advocated for the prevention of visual impairment in a hospital set-up.

  3. Effects of Optical Combiner and IPD Change for Convergence on Near-Field Depth Perception in an Optical See-Through HMD.

    PubMed

    Lee, Sangyoon; Hu, Xinda; Hua, Hong

    2016-05-01

    Many error sources have been explored with regard to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first source of error arises from the difference in refraction between the virtual and see-through optical paths caused by an optical combiner, which is required in OST-HMDs. The second arises from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for these two types of errors. Furthermore, we investigate their effectiveness through an experiment comparing conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven near-field distances (40∼200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, they demonstrated that near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.
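
    For an idealized plane-parallel combiner, the ray-shift component has a closed form: the standard slab formula for the lateral displacement of a refracted ray. The sketch below evaluates it; the plate thickness, refractive index, and incidence angle are hypothetical values, not the display parameters used in the study.

        import math

        def lateral_shift(t_mm, n, incidence_deg):
            # Lateral displacement of a ray crossing a plane-parallel plate:
            # d = t * sin(i) * (1 - cos(i) / sqrt(n^2 - sin^2(i))).
            i = math.radians(incidence_deg)
            return t_mm * math.sin(i) * (1 - math.cos(i) /
                                         math.sqrt(n ** 2 - math.sin(i) ** 2))

        print(f"{lateral_shift(t_mm=3.0, n=1.5, incidence_deg=30):.3f} mm")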

  4. Amblyopia and refractive errors among school-aged children with low socioeconomic status in southeastern Turkey.

    PubMed

    Caca, Ihsan; Cingu, Abdullah Kursat; Sahin, Alparslan; Ari, Seyhmus; Dursun, Mehmet Emin; Dag, Umut; Balsak, Selahattin; Alakus, Fuat; Yavuz, Abdullah; Palanci, Yilmaz

    2013-01-01

    To investigate the prevalence of refractive errors and other eye diseases, incidence and types of amblyopia in school-aged children, and their relation to gender, age, parental education, and socioeconomic factors. A total of 21,062 children 6 to 14 years old were screened. The examination included visual acuity measurements and ocular motility evaluation. Autorefraction under cycloplegia and examination of the external eye, anterior segment, media, and fundus were performed. There were 11,118 females and 9,944 males. The average age was 10.56 ± 3.59 years. When all of the children were evaluated, 3.2% had myopia and 5.9% had hyperopia. Astigmatism 0.50 D or greater was present in 14.3% of children. Myopia was associated with older age, female gender, and higher parental education. Hyperopia was inversely proportional with older age. Spectacles were needed in 4,476 (22.7%) children with refractive errors, and 10.6% of children were unaware of their spectacle needs. Amblyopia was detected in 2.6% of all children. The most common causes of amblyopia were anisometropia (1.2%) and strabismus (0.9%). Visual impairment is a common disorder in school-aged children. Eye health screening programs are beneficial in early detection and proper treatment of refractive errors. Copyright 2013, SLACK Incorporated.

  5. Effects of sharing information on drug administration errors in pediatric wards: a pre–post intervention study

    PubMed Central

    Chua, Siew-Siang; Choo, Sim-Mei; Sulaiman, Che Zuraini; Omar, Asma; Thong, Meow-Keong

    2017-01-01

    Background and purpose: Drug administration errors are more likely to reach the patient than other medication errors. The main aim of this study was to determine whether the sharing of information on drug administration errors among health care providers would reduce such problems. Patients and methods: This study involved direct, undisguised observations of drug administrations in two pediatric wards of a major teaching hospital in Kuala Lumpur, Malaysia. This study consisted of two phases: Phase 1 (pre-intervention) and Phase 2 (post-intervention). Data were collected by two observers over a 40-day period in both Phase 1 and Phase 2 of the study. Both observers were pharmacy graduates: Observer 1 had just completed her undergraduate pharmacy degree, whereas Observer 2 was doing her one-year internship as a provisionally registered pharmacist in the hospital under study. A drug administration error was defined as a discrepancy between the drug regimen received by the patient and that intended by the prescriber, and also as drug administration procedures that did not follow standard hospital policies and procedures. Results from Phase 1 of the study were analyzed, presented and discussed with the ward staff before commencement of data collection in Phase 2. Results: A total of 1,284 and 1,401 doses of drugs were administered in Phase 1 and Phase 2, respectively. The rate of drug administration errors reduced significantly from Phase 1 to Phase 2 (44.3% versus 28.6%, respectively; P<0.001). Logistic regression analysis showed that the adjusted odds of drug administration errors in Phase 1 of the study were almost three times those in Phase 2 (P<0.001). The most common types of errors were incorrect administration technique and incorrect drug preparation. Nasogastric and intravenous routes of drug administration contributed significantly to the rate of drug administration errors. Conclusion: This study showed that sharing of the types of errors that had occurred was significantly associated with a reduction in drug administration errors. PMID:28356748

  6. Some practical problems in implementing randomization.

    PubMed

    Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet

    2010-06-01

    While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
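
    Recommendations (1), (2), and (5) all point toward schedules generated up front from a recorded seed rather than on demand. A minimal sketch of a reproducible permuted-block schedule, with the block size, arm labels, and seed as hypothetical values:

        import random

        def permuted_block_schedule(n_blocks, block_size=4, arms=("A", "B"),
                                    seed=20100601):
            rng = random.Random(seed)        # recorded seed -> reproducible
            schedule = []
            for _ in range(n_blocks):
                # Each block contains every arm equally often, then is shuffled.
                block = list(arms) * (block_size // len(arms))
                rng.shuffle(block)
                schedule.extend(block)
            return schedule

        # Regenerating with the same seed reproduces the allocation exactly,
        # which supports pre-trial review and post-trial verification.
        print(permuted_block_schedule(n_blocks=3))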

  7. Risk behaviours for organism transmission in health care delivery-A two month unstructured observational study.

    PubMed

    Lindberg, Maria; Lindberg, Magnus; Skytt, Bernice

    2017-05-01

    Errors in infection control practices put patient safety at risk. The probability of errors can increase when care practices become more multifaceted. It is therefore fundamental to track risk behaviours and potential errors in various care situations. The aim of this study was to describe care situations involving risk behaviours for organism transmission that could lead to subsequent healthcare-associated infections. Unstructured nonparticipant observations were performed at three medical wards. Healthcare personnel (n=27) were shadowed, in total 39h, on randomly selected weekdays between 7:30 am and 12 noon. Content analysis was used to inductively categorize activities into tasks and, based on their character, into groups. Risk behaviours for organism transmission were deductively classified into types of errors. The multiple response crosstabs procedure was used to visualize the number and proportion of errors in tasks. One-way ANOVA with Bonferroni post hoc test was used to determine differences among the three groups of activities. The qualitative findings give an understanding that risk behaviours for organism transmission go beyond the five moments of hand hygiene and also include the handling and placement of materials and equipment. The tasks with the highest percentage of errors were 'personal hygiene', 'elimination' and 'dressing/wound care'. The most common types of errors in all identified tasks were 'hand disinfection', 'glove usage', and 'placement of materials'. Significantly more errors (p<0.0001) were observed the more multifaceted (single, combined or interrupted) the activity was. The numbers and types of errors, as well as the character of activities performed in care situations described in this study, confirm the need to improve current infection control practices. It is fundamental that healthcare personnel practice good hand hygiene; however, effective preventive hygiene is complex in healthcare activities due to the multifaceted care situations, especially when activities are interrupted. A deeper understanding of infection control practices that goes beyond the sense of security provided by hand disinfection and use of gloves is needed, as materials and surfaces in the care environment might be contaminated and thus pose a risk for organism transmission. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Identifying Human Factors Issues in Aircraft Maintenance Operations

    NASA Technical Reports Server (NTRS)

    Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)

    1995-01-01

    Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986-1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g, wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not) as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding what variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factor issues involved.

  9. Simple prescribing errors and allergy documentation in medical hospital admissions in Australia and New Zealand.

    PubMed

    Barton, Lorna; Futtermenger, Judith; Gaddi, Yash; Kang, Angela; Rivers, Jon; Spriggs, David; Jenkins, Paul F; Thompson, Campbell H; Thomas, Josephine S

    2012-04-01

    This study aimed to quantify and compare the prevalence of simple prescribing errors made by clinicians in the first 24 hours of a general medical patient's hospital admission. Four public or private acute care hospitals across Australia and New Zealand each audited 200 patients' drug charts. Patient demographics, pharmacist review and pre-defined prescribing errors were recorded. At least one simple error was present on the medication charts of 672/715 patients, with a linear relationship between the number of medications prescribed and the number of errors (r = 0.571, p < 0.001). The four sites differed significantly in the prevalence of different types of simple prescribing errors. Pharmacists were more likely to review patients aged ≥75 years (39.9% vs 26.0%; p < 0.001) and those with more than 10 drug prescriptions (39.4% vs 25.7%; p < 0.001). Patients reviewed by a pharmacist were less likely to have inadequate documentation of allergies (13.5% vs 29.4%, p < 0.001). Simple prescribing errors are common, although their nature differs from site to site. Clinical pharmacists target patients with the most complex health situations, and their involvement leads to improved documentation.

  10. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modelling heteroscedastic residual errors

    NASA Astrophysics Data System (ADS)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George

    2017-04-01

    This study provides guidance that enables hydrological researchers to produce probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of the residual errors of hydrological models. Hydrological model residual errors are commonly heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of the empirical results highlight the importance of capturing the skew/kurtosis of the raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
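
    The Box-Cox schemes on the Pareto front are straightforward to apply: transform observed and simulated flows with z = (y^lambda - 1)/lambda (the log scheme being the lambda = 0 limit) and form residuals in the transformed space, where their spread is closer to constant. A minimal sketch on synthetic flows, with the lognormal flow generator and error scale as assumptions:

        import numpy as np

        def boxcox(y, lam):
            # z = (y^lambda - 1) / lambda, with the log transform at lambda = 0.
            return np.log(y) if lam == 0 else (y ** lam - 1) / lam

        rng = np.random.default_rng(5)
        q_obs = rng.lognormal(mean=1.0, sigma=1.0, size=1000)           # observed flow
        q_sim = q_obs * rng.lognormal(mean=0.0, sigma=0.3, size=1000)   # model output

        for lam in (0.2, 0.5):
            resid = boxcox(q_obs, lam) - boxcox(q_sim, lam)
            print(f"lambda={lam}: residual std = {resid.std():.3f}")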

  11. Accounting for measurement error: a critical but often overlooked process.

    PubMed

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
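
    For duplicate measurement sessions, TEM is commonly computed with Dahlberg's formula, TEM = sqrt(sum(d^2) / 2N), where d is the difference between paired measurements and N the number of pairs. A minimal sketch with invented measurements:

        import math

        session1 = [31.2, 28.7, 33.1, 30.4, 29.9]   # mm, first digitization
        session2 = [31.5, 28.4, 33.0, 30.9, 29.6]   # mm, repeat measurement

        diffs = [a - b for a, b in zip(session1, session2)]
        tem = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
        mean = sum(session1 + session2) / (2 * len(session1))
        print(f"TEM = {tem:.3f} mm ({100 * tem / mean:.2f}% of the mean)")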

  12. Quality of Impressions and Work Authorizations Submitted by Dental Students Supervised by Prosthodontists and General Dentists.

    PubMed

    Imbery, Terence A; Diaz, Nicholas; Greenfield, Kristy; Janus, Charles; Best, Al M

    2016-10-01

    Preclinical fixed prosthodontics is taught by Department of Prosthodontics faculty members at Virginia Commonwealth University School of Dentistry; however, 86% of all clinical cases in academic year 2012 were staffed by faculty members from the Department of General Practice. The aims of this retrospective study were to quantify the quality of impressions, accuracy of laboratory work authorizations, and most common errors and to determine if there were differences between the rate of errors in cases supervised by the prosthodontists and the general dentists. A total of 346 Fixed Prosthodontic Laboratory Tracking Sheets for the 2012 academic year were reviewed. The results showed that, overall, 73% of submitted impressions were acceptable at initial evaluation, 16% had to be poured first and re-evaluated for quality prior to pindexing, 7% had multiple impressions submitted for transfer dies, and 4% were rejected for poor quality. There were higher acceptance rates for impressions and work authorizations for cases staffed by prosthodontists than by general dentists, but the differences were not statistically significant (p=0.0584 and p=0.0666, respectively). Regarding the work authorizations, 43% overall did not provide sufficient information or had technical errors that delayed prosthesis fabrication. The most common errors were incorrect mountings, absence of solid casts, inadequate description of margins for porcelain fused to metal crowns, inaccurate die trimming, and margin marking. The percentages of errors in cases supervised by general dentists and prosthodontists were similar for 17 of the 18 types of errors identified; only for margin description was the percentage of errors statistically significantly higher for general dentist-supervised than prosthodontist-supervised cases. These results highlighted the ongoing need for faculty development and calibration to ensure students receive the highest quality education from all faculty members teaching fixed prosthodontics.

  13. An Analysis of 34,218 Pediatric Outpatient Controlled Substance Prescriptions.

    PubMed

    George, Jessica A; Park, Paul S; Hunsberger, Joanne; Shay, Joanne E; Lehmann, Christoph U; White, Elizabeth D; Lee, Benjamin H; Yaster, Myron

    2016-03-01

    Prescription errors are among the most common types of iatrogenic errors. Because of a previously reported 82% error rate in handwritten discharge narcotic prescriptions, we developed a computerized, web-based, controlled substance prescription writer that includes weight-based dosing logic and alerts to reduce the error rate to (virtually) zero. Over the past 7 years, >34,000 prescriptions have been created by hospital providers using this platform. We sought to determine the ongoing efficacy of the program in prescription error reduction and the patterns with which providers prescribe controlled substances for children and young adults (ages 0-21 years) at hospital discharge. We examined a database of 34,218 controlled substance discharge prescriptions written by our institutional providers from January 1, 2007 to February 14, 2014, for demographic information, including age and weight, type of medication prescribed based on patient age, formulation of dispensed medication, and amount of drug to be dispensed at hospital discharge. In addition, we randomly regenerated 2% (700) of prescriptions based on stored data and analyzed them for errors using previously established error criteria. Weights that were manually entered into the prescription writer by the prescriber were compared with the patient's weight in the hospital's electronic medical record. Patients in the database averaged 9 ± 6.1 (range, 0-21) years of age and 36.7 ± 24.9 (1-195) kg. Regardless of age, the most commonly prescribed opioid was oxycodone (73%), which was prescribed as a single agent uncombined with acetaminophen. Codeine was prescribed to 7% of patients and always in a formulation containing acetaminophen. Liquid formulations were prescribed to 98% of children <6 years of age and to 16% of children >12 years of age (the remaining 84% received tablet formulations). Regardless of opioid prescribed, the amount of liquid dispensed averaged 106 ± 125 (range, 2-3240) mL, and the number of tablets dispensed averaged 51 ± 51 (range, 1-1080). Of the subset of 700 regenerated prescriptions, all were legible (drug, amount dispensed, dose, patient demographics, and provider name) and used best prescribing practice (e.g., no trailing zero after a decimal point, leading zero for doses <1). Twenty-five of the 700 (3.6%) had incorrectly entered weights compared with the most recent weight in the chart. Of these, 14 varied by 10% or less and only 2 varied by >15%. Of these, 1 resulted in underdosing (true weight 80 kg prescribed for a weight of 50 kg) and the other in overdosing (true weight 10 kg prescribed for a weight of 30 kg). A computerized prescription writer eliminated most but not all the errors common to handwritten prescriptions. Oxycodone has supplanted codeine as the most commonly prescribed oral opioid in current pediatric pain practice and, independent of formulation, is dispensed in large quantities. This study underscores the need for liquid opioid formulations in the pediatric population and, because of their abuse potential, the urgent need to determine how much of the prescribed medication is actually used by patients.
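
    A minimal sketch of the kind of weight-based dosing logic and weight-entry alert the platform is described as using; the function, the 10% weight tolerance, and the mg/kg reference range are illustrative assumptions, not the hospital's actual rules.

        def check_prescription(dose_mg, entered_wt_kg, chart_wt_kg,
                               mg_per_kg_range=(0.05, 0.15), wt_tolerance=0.10):
            alerts = []
            # Alert if the weight typed by the prescriber disagrees with the chart.
            if abs(entered_wt_kg - chart_wt_kg) / chart_wt_kg > wt_tolerance:
                alerts.append("entered weight differs >10% from chart weight")
            # Alert if the per-kg dose falls outside the reference range.
            per_kg = dose_mg / entered_wt_kg
            lo, hi = mg_per_kg_range
            if not lo <= per_kg <= hi:
                alerts.append(f"dose {per_kg:.2f} mg/kg outside {lo}-{hi} mg/kg")
            return alerts

        # Example: weight entered as 50 kg while the chart says 80 kg.
        print(check_prescription(dose_mg=5.0, entered_wt_kg=50.0, chart_wt_kg=80.0))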

  14. Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine.

    PubMed

    Okafor, Nnaemeka; Payne, Velma L; Chathampally, Yashwant; Miller, Sara; Doshi, Pratik; Singh, Hardeep

    2016-04-01

    Diagnostic errors are common in the emergency department (ED), but few studies have comprehensively evaluated their types and origins. We analysed incidents reported by ED physicians to determine disease conditions, contributory factors and patient harm associated with ED-related diagnostic errors. Between 1 March 2009 and 31 December 2013, ED physicians reported 509 incidents using a department-specific voluntary incident-reporting system that we implemented at two large academic hospital-affiliated EDs. For this study, we analysed 209 incidents related to diagnosis. A quality assurance team led by an ED physician champion reviewed each incident and interviewed physicians when necessary to confirm the presence/absence of diagnostic error and to determine the contributory factors. We generated descriptive statistics quantifying disease conditions involved, contributory factors and patient harm from errors. Among the 209 incidents, we identified 214 diagnostic errors associated with 65 unique diseases/conditions, including sepsis (9.6%), acute coronary syndrome (9.1%), fractures (8.6%) and vascular injuries (8.6%). Contributory factors included cognitive (n=317), system related (n=192) and non-remedial (n=106). Cognitive factors included faulty information verification (41.3%) and faulty information processing (30.6%) whereas system factors included high workload (34.4%) and inefficient ED processes (40.1%). Non-remediable factors included atypical presentation (31.3%) and the patients' inability to provide a history (31.3%). Most errors (75%) involved multiple factors. Major harm was associated with 34/209 (16.3%) of reported incidents. Most diagnostic errors in ED appeared to relate to common disease conditions. While sustaining diagnostic error reporting programmes might be challenging, our analysis reveals the potential value of such systems in identifying targets for improving patient safety in the ED. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  15. Do Peripheral Refraction and Aberration Profiles Vary with the Type of Myopia? - An Illustration Using a Ray-Tracing Approach

    PubMed Central

    Bakaraju, Ravi C.; Ehrmann, Klaus; Papas, Eric B.; Ho, Arthur

    2010-01-01

    Purpose Myopia is considered to be the most common refractive error in children and young adults around the world. Motivated to elucidate how the process of emmetropization is disrupted, potentially causing myopia and its progression, researchers have shown great interest in peripheral refraction. This study assessed the effect of the type of myopia, either refractive or axial, on peripheral refraction and aberration profiles. Methods Using customized schematic eye models for myopia in a ray-tracing algorithm, peripheral aberrations, including the refractive error, were calculated as a function of myopia type. Results In all the selected models, hyperopic shifts in the mean spherical equivalent (MSE) component were found, with magnitudes largely dependent on the field angle. The MSE profiles showed larger hyperopic shifts for the axial myopia models than for the refractive ones, an effect evident in the -4 and -6 D prescriptions. Additionally, greater levels of the astigmatic component (J180) were seen in axial-length-dependent models, while refractive models showed higher levels of spherical aberration and coma. Conclusion This study indicates that myopic eyes with a primarily axial component may have a greater risk of progression than their refractive counterparts, albeit with the same degree of refractive error. This prediction emerges from the presented theoretical ray-tracing model and therefore requires clinical confirmation.

  16. Mathematical Writing Errors in Expository Writings of College Mathematics Students

    ERIC Educational Resources Information Center

    Guce, Ivee K.

    2017-01-01

    Despite the efforts to confirm the effectiveness of writing in learning mathematics, analysis on common errors in mathematical writings has not received sufficient attention. This study aimed to provide an account of the students' procedural explanations in terms of their commonly committed errors in mathematical writing. Nine errors in…

  17. An analysis of errors, discrepancies, and variation in opioid prescriptions for adult outpatients at a teaching hospital

    PubMed Central

    Bicket, Mark C.; Kattail, Deepa; Yaster, Myron; Wu, Christopher L.; Pronovost, Peter

    2017-01-01

    Objective To determine opioid prescribing patterns and the rates of three types of errors, discrepancies, and variation from ideal practice. Design Retrospective review of opioid prescriptions processed at an outpatient pharmacy. Setting Tertiary institutional medical center. Patients We examined 510 consecutive opioid medication prescriptions for adult patients processed at an institutional outpatient pharmacy in June 2016 for patient, provider, and prescription characteristics. Main Outcome Measure(s) We analyzed prescriptions for deviation from best practice guidelines, lack of two patient identifiers, and noncompliance with Drug Enforcement Agency (DEA) rules. Results Mean patient age (SD) was 47.5 years (17.4). The most commonly prescribed opioid was oxycodone (71%), usually not combined with acetaminophen. Practitioners prescribed tablet formulations to 92% of the sample, averaging 57 (47) pills per prescription. We identified at least one error on 42% of prescriptions. Among all prescriptions, 9% deviated from best practice guidelines, 21% failed to include two patient identifiers, and 41% were noncompliant with DEA rules. Errors occurred in 89% of handwritten prescriptions, 0% of electronic health record (EHR) computer-generated prescriptions, and 12% of non-EHR computer-generated prescriptions. Inter-rater reliability by kappa was 0.993. Conclusions Inconsistencies in opioid prescribing remain common. Handwritten prescriptions continue to show higher rates of errors, discrepancies, and variation from ideal practice and government regulations. All computer-generated prescriptions adhered to best practice guidelines and contained two patient identifiers, and all EHR prescriptions were fully compliant with DEA rules. PMID:28345746

  18. A cross "ethnical" comparison of the Driver Behaviour Questionnaire (DBQ) in an economically fast developing country.

    PubMed

    Bener, Abdulbari; Verjee, Mohamud; Dafeeah, Elnour E; Yousafzai, Mohammad T; Mari, Sundus; Hassib, Ahmed; Al-Khatib, Hamza; Choi, Min Kyung; Nema, Noor; Ozkan, Türker; Lajunen, Timo

    2013-05-12

    The aim of this study was to compare the driving behaviours of four ethnic groups and to investigate the relationship between the violations, errors and lapses of the DBQ and accident involvement in Qatar. The Driver Behaviour Questionnaire (DBQ) was used to measure the aberrant driving behaviours leading to accidents. Of 2400 drivers approached, 1824 drivers agreed to participate (76%) and completed the driver behaviour questionnaire and background information. The study revealed that the majority of the Qatari (35.9%) and Jordanian (37.5%) drivers were below 30 years of age, whereas Filipino (42.3%) and Indian subcontinent (34.1%) drivers were in the age group of 30-39 years. Qatari drivers (52%) were involved in the most accidents, followed by Jordanians (48.3%). The most common type of collision was a head-on collision, which was similar in all four ethnic groups. The Qatari drivers scored higher on almost all items of violations, errors and lapses compared with the other ethnic groups, while Filipino drivers scored lower on all items. The most common violation was the same in all four ethnic groups: "Disregard the speed limits on a motorway". The most common error item observed was "Queuing to turn right/left on to a main road". "Forget where you left your car" and "Hit something when reversing" were the two lapses identified in factor analysis. The present study identified that Qatari drivers scored higher on most of the DBQ violation, error and lapse items compared with the other ethnic groups, whereas Filipino drivers scored lower on the DBQ items.

  19. Student Beliefs towards Written Corrective Feedback: The Case of Filipino High School Students

    ERIC Educational Resources Information Center

    Balanga, Roselle A.; Fidel, Irish Van B.; Gumapac, Mone Virma Ginry P.; Ho, Howell T.; Tullo, Riza Mae C.; Villaraza, Patricia Monette L.; Vizconde, Camilla J.

    2016-01-01

    The study identified the beliefs of high school students toward Written Corrective Feedback (WCF), based on the framework of Anderson (2010). It also investigated the most common errors that students commit in writing stories and the type of WCF students receive from teachers. Data in the form of stories which were checked by teachers were…

  20. ERM model analysis for adaptation to hydrological model errors

    NASA Astrophysics Data System (ADS)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these changes introduce errors into flood forecasting models that can lead to unrealistic results. To overcome this difficulty, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in hydrological sciences and has not been entirely solved, owing to a lack of knowledge about the future state of the catchment under study. In flood forecasting, errors propagated from the rainfall-runoff model are the main source of uncertainty in the forecast. Hence, to manage these errors, researchers have proposed several methods for updating rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study investigates the ability of rainfall-runoff model parameters to cope with the three types of error common in hydrological modelling: timing, shape and volume. The new lumped model, the ERM model, was selected for this study, and its parameters were evaluated for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with these errors without the need to recalibrate the model.
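    As a rough illustration of the three error types named in this abstract, the sketch below computes simple timing, volume and shape diagnostics between a simulated and an observed hydrograph. The metric definitions and the synthetic data are illustrative assumptions, not the ERM model's actual updating scheme.

    ```python
    import numpy as np

    def hydrograph_errors(sim: np.ndarray, obs: np.ndarray, dt_hours: float = 1.0):
        """Toy diagnostics for the three common error types: timing, volume, shape."""
        timing = (np.argmax(sim) - np.argmax(obs)) * dt_hours   # peak-time shift (h)
        volume = (sim.sum() - obs.sum()) / obs.sum()            # relative volume error
        scaled = sim * obs.sum() / sim.sum()                    # remove volume bias first
        shape = float(np.sqrt(np.mean((scaled - obs) ** 2)))    # residual shape error (RMSE)
        return timing, volume, shape

    t = np.arange(48.0)
    obs = np.exp(-0.5 * ((t - 20) / 4) ** 2)        # synthetic observed flood wave
    sim = 1.2 * np.exp(-0.5 * ((t - 23) / 5) ** 2)  # simulated wave: late and too large
    print(hydrograph_errors(sim, obs))
    ```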

  1. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and the PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For the amount of error modelled for CO, a range of error types was simulated and the effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and the type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate the direction and magnitude of the effects of error over a range of error types.
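    The classical/Berkson distinction is easy to reproduce in a toy version of this simulation design. The sketch below (illustrative parameter values, not the Atlanta data) adds multiplicative error on each side of the truth and fits the Poisson model described above; the classical case attenuates the slope, while the Berkson case can bias the per-unit slope away from the null, as the abstract reports.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n, beta, sigma_u = 5000, 0.05, 0.3   # days, true coefficient, log-scale error SD

    def fitted_slope(y, exposure):
        """Exposure coefficient from a Poisson generalized linear model."""
        model = sm.GLM(y, sm.add_constant(exposure), family=sm.families.Poisson())
        return model.fit().params[1]

    # Classical-type error: the measurement w scatters around the true exposure x.
    x = rng.lognormal(1.0, 0.4, n)
    y = rng.poisson(np.exp(2.0 + beta * x))
    w = x * rng.lognormal(0.0, sigma_u, n)
    print("classical:", fitted_slope(y, w))    # attenuated toward zero

    # Berkson-type error: the true exposure scatters around the measurement z.
    z = rng.lognormal(1.0, 0.4, n)
    x_true = z * rng.lognormal(0.0, sigma_u, n)
    y_b = rng.poisson(np.exp(2.0 + beta * x_true))
    print("Berkson:  ", fitted_slope(y_b, z))  # per-unit slope biased away from the null
    ```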

  2. Knowledge of healthcare professionals about medication errors in hospitals

    PubMed Central

    Abdel-Latif, Mohamed M. M.

    2016-01-01

    Context: Medication errors are the most common type of medical error in hospitals and a leading cause of morbidity and mortality among patients. Aims: The aim of the present study was to assess the knowledge of healthcare professionals about medication errors in hospitals. Settings and Design: A self-administered questionnaire was distributed to randomly selected healthcare professionals in eight hospitals in Madinah, Saudi Arabia. Subjects and Methods: An 18-item survey was designed, comprising questions on demographic data, knowledge of medication errors, availability of reporting systems in hospitals, attitudes toward error reporting, and causes of medication errors. Statistical Analysis Used: Data were analyzed with Statistical Package for the Social Sciences software Version 17. Results: A total of 323 healthcare professionals completed the questionnaire (a 64.6% response rate): 138 (42.72%) physicians, 34 (10.53%) pharmacists, and 151 (46.75%) nurses. A majority of the participants had good knowledge of the concept of medication errors and their dangers to patients. Only 68.7% of them were aware of reporting systems in hospitals. Healthcare professionals revealed that there was no clear mechanism available for reporting errors in most hospitals. Prescribing (46.5%) and administration (29%) errors were the main sources of error. The medications most frequently involved in errors were antihypertensives, antidiabetics, antibiotics, digoxin, and insulin. Conclusions: This study revealed differences in awareness of medication errors among healthcare professionals in hospitals. The poor knowledge about medication errors emphasizes the urgent necessity to adopt appropriate measures to raise awareness of medication errors in Saudi hospitals. PMID:27330261

  3. E-prescribing errors in community pharmacies: exploring consequences and contributing factors.

    PubMed

    Odukoya, Olufunmilola K; Stone, Jamie A; Chui, Michelle A

    2014-06-01

    To explore types of e-prescribing errors in community pharmacies and their potential consequences, as well as the factors that contribute to e-prescribing errors. Data collection involved performing 45 total hours of direct observations in five pharmacies. Follow-up interviews were conducted with 20 study participants. Transcripts from observations and interviews were subjected to content analysis using NVivo 10. Pharmacy staff detected 75 e-prescription errors during the 45 h observation in pharmacies. The most common e-prescribing errors were wrong drug quantity, wrong dosing directions, wrong duration of therapy, and wrong dosage formulation. Participants estimated that 5 in 100 e-prescriptions have errors. Drug classes that were implicated in e-prescribing errors were antiinfectives, inhalers, ophthalmic, and topical agents. The potential consequences of e-prescribing errors included increased likelihood of the patient receiving incorrect drug therapy, poor disease management for patients, additional work for pharmacy personnel, increased cost for pharmacies and patients, and frustrations for patients and pharmacy staff. Factors that contribute to errors included: technology incompatibility between pharmacy and clinic systems, technology design issues such as use of auto-populate features and dropdown menus, and inadvertently entering incorrect information. Study findings suggest that a wide range of e-prescribing errors is encountered in community pharmacies. Pharmacists and technicians perceive that causes of e-prescribing errors are multidisciplinary and multifactorial, that is to say e-prescribing errors can originate from technology used in prescriber offices and pharmacies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. E-Prescribing Errors in Community Pharmacies: Exploring Consequences and Contributing Factors

    PubMed Central

    Stone, Jamie A.; Chui, Michelle A.

    2014-01-01

    Objective To explore types of e-prescribing errors in community pharmacies and their potential consequences, as well as the factors that contribute to e-prescribing errors. Methods Data collection involved performing 45 total hours of direct observations in five pharmacies. Follow-up interviews were conducted with 20 study participants. Transcripts from observations and interviews were subjected to content analysis using NVivo 10. Results Pharmacy staff detected 75 e-prescription errors during the 45 hour observation in pharmacies. The most common e-prescribing errors were wrong drug quantity, wrong dosing directions, wrong duration of therapy, and wrong dosage formulation. Participants estimated that 5 in 100 e-prescriptions have errors. Drug classes that were implicated in e-prescribing errors were antiinfectives, inhalers, ophthalmic, and topical agents. The potential consequences of e-prescribing errors included increased likelihood of the patient receiving incorrect drug therapy, poor disease management for patients, additional work for pharmacy personnel, increased cost for pharmacies and patients, and frustrations for patients and pharmacy staff. Factors that contribute to errors included: technology incompatibility between pharmacy and clinic systems, technology design issues such as use of auto-populate features and dropdown menus, and inadvertently entering incorrect information. Conclusion Study findings suggest that a wide range of e-prescribing errors are encountered in community pharmacies. Pharmacists and technicians perceive that causes of e-prescribing errors are multidisciplinary and multifactorial, that is to say e-prescribing errors can originate from technology used in prescriber offices and pharmacies. PMID:24657055

  5. Analyzing self-controlled case series data when case confirmation rates are estimated from an internal validation sample.

    PubMed

    Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M

    2018-05-16

    Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, approaches such as observed cases, confirmed cases only, and known confirmation rate may inflate the type I error rate, yield biased point estimates, and affect statistical power. The multiple imputation approach accounts for the uncertainty of the confirmation rates estimated from an internal validation sample and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
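    A drastically simplified, summary-count version of the imputation idea is sketched below: the confirmation rate is drawn from its posterior given the validation sample, true case counts are imputed, and the spread across imputations carries the extra uncertainty. All counts are invented, and a real SCCS analysis would impute individual case status inside a conditional Poisson model rather than work with window totals.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Invented SCCS summary: EHR-coded events in a 30-day risk window and a
    # 335-day control window, plus a validation sample (80 of 100 confirmed).
    n_risk, n_ctrl = 40, 220
    t_risk, t_ctrl = 30.0, 335.0
    v_n, v_conf = 100, 80

    def log_rel_incidence(a, b):
        """Log relative incidence from window counts and person-time."""
        return np.log((a / t_risk) / (b / t_ctrl))

    m, draws = 500, []
    for _ in range(m):
        ppv = rng.beta(v_conf + 1, v_n - v_conf + 1)  # posterior draw of confirmation rate
        a = rng.binomial(n_risk, ppv)                 # imputed true cases, risk window
        b = rng.binomial(n_ctrl, ppv)                 # imputed true cases, control window
        if a > 0 and b > 0:
            draws.append(log_rel_incidence(a, b))

    draws = np.array(draws)
    print("observed-cases log-RI: %.3f" % log_rel_incidence(n_risk, n_ctrl))
    print("MI log-RI: %.3f (between-imputation SD %.3f)" % (draws.mean(), draws.std()))
    ```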

  6. New Insights into Handling Missing Values in Environmental Epidemiological Studies

    PubMed Central

    Roda, Célina; Nicolis, Ioannis; Momas, Isabelle; Guihenneuc, Chantal

    2014-01-01

    Missing data are unavoidable in environmental epidemiologic surveys. The aim of this study was to compare methods for handling large amounts of missing values: omission of missing values, single and multiple imputation (through linear regression or partial least squares regression), and a fully Bayesian approach. These methods were applied to the PARIS birth cohort, where indoor domestic pollutant measurements were performed in a random sample of babies' dwellings. A simulation study was conducted to assess the performance of the different approaches with a high proportion of missing values (from 50% to 95%). Different simulation scenarios were carried out, controlling the true value of the association (odds ratio of 1.0, 1.2, and 1.4) and varying the health outcome prevalence. When a large amount of data was missing, omitting the missing data reduced statistical power and inflated standard errors, which affected the significance of the association. Single imputation underestimated the variability and considerably increased the risk of type I error. All approaches were conservative, except the Bayesian joint model. In the case of a common health outcome, the fully Bayesian approach is the most efficient (low root mean square error, reasonable type I error, and high statistical power). Nevertheless, for a less prevalent event, the type I error is increased and the statistical power reduced. The estimated posterior distribution of the OR is useful to refine the conclusion. Among the methods for handling missing values, no approach is uniformly best, but when the usual approaches (e.g. single imputation) are not sufficient, jointly modelling the missingness process and the health association is more efficient when large amounts of data are missing. PMID:25226278
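    To make the single-versus-multiple imputation contrast concrete, the sketch below imputes a 70%-missing covariate 20 times and pools a logistic-regression coefficient with Rubin's rules. The deliberately simple imputation model and all data are invented for illustration and are much cruder than the regression, PLS and Bayesian models compared in the study.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)

    n = 1000
    x = rng.normal(0.0, 1.0, n)                  # pollutant level per dwelling
    p = 1.0 / (1.0 + np.exp(-(-1.0 + 0.3 * x)))  # true exposure-outcome model
    y = (rng.uniform(size=n) < p).astype(float)
    measured = rng.uniform(size=n) < 0.30        # only 30% of dwellings measured

    def slope_and_se(x_full):
        fit = sm.GLM(y, sm.add_constant(x_full), family=sm.families.Binomial()).fit()
        return fit.params[1], fit.bse[1]

    m, estimates, variances = 20, [], []
    mu, sd = x[measured].mean(), x[measured].std()
    for _ in range(m):
        x_imp = x.copy()
        x_imp[~measured] = rng.normal(mu, sd, (~measured).sum())  # draw, don't plug in a mean
        b, se = slope_and_se(x_imp)
        estimates.append(b)
        variances.append(se ** 2)

    qbar = np.mean(estimates)                                     # Rubin: pooled estimate
    total_var = np.mean(variances) + (1 + 1 / m) * np.var(estimates, ddof=1)
    print("pooled beta %.3f, pooled SE %.3f" % (qbar, np.sqrt(total_var)))
    ```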

  7. Social and monetary reward learning engage overlapping neural substrates.

    PubMed

    Lin, Alice; Adolphs, Ralph; Rangel, Antonio

    2012-03-01

    Learning to make choices that yield rewarding outcomes requires the computation of three distinct signals: stimulus values that are used to guide choices at the time of decision making, experienced utility signals that are used to evaluate the outcomes of those decisions and prediction errors that are used to update the values assigned to stimuli during reward learning. Here we investigated whether monetary and social rewards involve overlapping neural substrates during these computations. Subjects engaged in two probabilistic reward learning tasks that were identical except that rewards were either social (pictures of smiling or angry people) or monetary (gaining or losing money). We found substantial overlap between the two types of rewards for all components of the learning process: a common area of ventromedial prefrontal cortex (vmPFC) correlated with stimulus value at the time of choice and another common area of vmPFC correlated with reward magnitude and common areas in the striatum correlated with prediction errors. Taken together, the findings support the hypothesis that shared anatomical substrates are involved in the computation of both monetary and social rewards. © The Author (2011). Published by Oxford University Press.
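    The three signals named in this abstract map onto a few lines of a standard reinforcement-learning update (a Rescorla-Wagner style rule, shown here as a generic illustration rather than the authors' fitted model); the reward could equally be money won or a coded social outcome such as smiling = 1, angry = 0.

    ```python
    def update_value(value: float, reward: float, alpha: float = 0.1):
        """One learning step: compare outcome with expectation, then update."""
        prediction_error = reward - value   # error signal (striatal correlate in the study)
        value += alpha * prediction_error   # updated stimulus value (vmPFC correlate)
        return value, prediction_error

    v = 0.0
    for r in [1, 1, 0, 1, 0, 0, 1]:         # a probabilistic reward stream
        v, pe = update_value(v, r)
        print(f"value={v:.3f}  prediction_error={pe:+.3f}")
    ```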

  8. Common but unappreciated sources of error in one, two, and multiple-color pyrometry

    NASA Technical Reports Server (NTRS)

    Spjut, R. Erik

    1988-01-01

    The most common sources of error in optical pyrometry are examined. They can be classified as either noise and uncertainty errors, stray radiation errors, or speed-of-response errors. Through judicious choice of detectors and optical wavelengths the effect of noise errors can be minimized, but one should strive to determine as many of the system properties as possible. Careful consideration of the optical-collection system can minimize stray radiation errors. Careful consideration must also be given to the slowest elements in a pyrometer when measuring rapid phenomena.
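    The abstract gives no formulas, but the sensitivity of two-color pyrometry to these error sources can be made concrete with the standard Wien-approximation relation (a textbook result, not taken from this report). With spectral radiances $L_1, L_2$ measured at wavelengths $\lambda_1, \lambda_2$, emissivities $\varepsilon_1, \varepsilon_2$, and second radiation constant $c_2$, the inferred temperature is

    $$T = \frac{c_2\left(\tfrac{1}{\lambda_2} - \tfrac{1}{\lambda_1}\right)}{\ln\dfrac{L_1}{L_2} - \ln\dfrac{\varepsilon_1}{\varepsilon_2} - 5\ln\dfrac{\lambda_2}{\lambda_1}}$$

    Stray radiation that inflates one measured radiance, or an error in the assumed emissivity ratio, enters through the logarithms in the denominator and so propagates directly into the inferred temperature.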

  9. Antiretroviral medication prescribing errors are common with hospitalization of HIV-infected patients.

    PubMed

    Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel

    2014-01-01

    Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.

  10. Preanalytical Errors in Hematology Laboratory - An Avoidable Incompetence.

    PubMed

    Kaur, Harsimran; Narang, Vikram; Selhi, Pavneet Kaur; Sood, Neena; Singh, Aminder

    2016-01-01

    Quality assurance in the hematology laboratory is a must to assure laboratory users of reliable test results with a high degree of precision and accuracy. Even after many advances in hematology laboratory practice, preanalytical errors remain a challenge for practicing pathologists. This study was undertaken with the objective of evaluating the types and frequency of preanalytical errors in the hematology laboratory of our center. All the samples received in the Hematology Laboratory of Dayanand Medical College and Hospital, Ludhiana, India over a period of one year (July 2013-July 2014) were included in the study, and preanalytical variables (clotted sample, insufficient quantity, wrong sample, missing label and wrong label) were recorded. Of 471,006 samples received in the laboratory, preanalytical errors in the above categories were found in 1802 samples. The most common error was clotted samples (1332 samples, 0.28% of the total), followed by insufficient quantity (328 samples, 0.06%), wrong sample (96 samples, 0.02%), missing label (24 samples, 0.005%) and wrong label (22 samples, 0.005%). Preanalytical errors are frequent in laboratories and can be reduced by regular analysis of the variables involved and by regular education of the staff.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.

    Real-time terrain rendering for interactive visualization remains a demanding task. We present a novel algorithm with several advantages over previous methods: our method is unusually stingy with polygons yet achieves real-time performance and is scalable to arbitrary regions and resolutions. The method provides a continuous terrain mesh of specified triangle count having provably minimum error in restricted but reasonably general classes of permissible meshes and error metrics. Our method provides an elegant solution to guaranteeing certain elusive types of consistency in scenes produced by multiple scene generators which share a common finest-resolution database but which otherwise operate entirely independently. This consistency is achieved by exploiting the freedom of choice of error metric allowed by the algorithm to provide, for example, multiple exact lines-of-sight in real time. Our methods rely on an off-line pre-processing phase to construct a multi-scale data structure consisting of triangular terrain approximations enhanced ("thickened") with world-space error information. In real time, this error data is efficiently transformed into screen space, where it is used to guide a greedy top-down triangle subdivision algorithm which produces the desired minimal-error continuous terrain mesh. Our algorithm has been implemented and it operates at real-time rates.
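    A toy version of the greedy top-down refinement loop described here is sketched below. It keeps a max-heap of triangles keyed by error and splits the worst one until a triangle budget is reached; the halving error model is invented, and the real algorithm additionally performs forced splits so neighbouring triangles stay crack-free.

    ```python
    import heapq

    def refine(root_errors, budget):
        """Greedily split the highest-error triangle until `budget` triangles exist."""
        heap = [(-err, i) for i, err in enumerate(root_errors)]  # max-heap via negation
        heapq.heapify(heap)
        count = len(heap)
        while count < budget and heap:
            neg_err, tie = heapq.heappop(heap)
            if neg_err == 0.0:                 # nothing visibly wrong remains
                heapq.heappush(heap, (neg_err, tie))
                break
            for _ in range(2):                 # parent splits into two children
                heapq.heappush(heap, (neg_err / 2, count))  # invented: children halve error
                count += 1
            count -= 1                         # the popped parent no longer counts
        return sorted((-e for e, _ in heap), reverse=True)

    print(refine([8.0, 2.0], budget=6))        # screen-space errors of the 6 leaf triangles
    ```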

  12. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to analysis of the hypothalamus-pituitary-thyroid (HPT) system for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainty are (1) diurnal variation in [TSH], (2) TFT measurement variation influenced by the timing of thyroid medications, (3) error sensitivity in the ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (a rate-independent hysteresis effect). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.

  13. [Epidemiology of refractive errors].

    PubMed

    Wolfram, C

    2017-07-01

    Refractive errors are very common and can lead to severe pathological changes in the eye. This article analyzes the epidemiology of refractive errors in the general population in Germany and worldwide, and describes common definitions for refractive errors and the clinical characteristics of pathological changes. Refractive errors differ between age groups, due both to refractive changes over the lifetime and to generation-specific factors. Current research on the etiology of refractive errors has strengthened the evidence for environmental influences, which has led to new strategies for the prevention of refractive pathologies.

  14. Genome-wide association meta-analysis highlights light-induced signaling as a driver for refractive error.

    PubMed

    Tedja, Milly S; Wojciechowski, Robert; Hysi, Pirro G; Eriksson, Nicholas; Furlotte, Nicholas A; Verhoeven, Virginie J M; Iglesias, Adriana I; Meester-Smoor, Magda A; Tompson, Stuart W; Fan, Qiao; Khawaja, Anthony P; Cheng, Ching-Yu; Höhn, René; Yamashiro, Kenji; Wenocur, Adam; Grazal, Clare; Haller, Toomas; Metspalu, Andres; Wedenoja, Juho; Jonas, Jost B; Wang, Ya Xing; Xie, Jing; Mitchell, Paul; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Paterson, Andrew D; Hosseini, S Mohsen; Shah, Rupal L; Williams, Cathy; Teo, Yik Ying; Tham, Yih Chung; Gupta, Preeti; Zhao, Wanting; Shi, Yuan; Saw, Woei-Yuh; Tai, E-Shyong; Sim, Xue Ling; Huffman, Jennifer E; Polašek, Ozren; Hayward, Caroline; Bencic, Goran; Rudan, Igor; Wilson, James F; Joshi, Peter K; Tsujikawa, Akitaka; Matsuda, Fumihiko; Whisenhunt, Kristina N; Zeller, Tanja; van der Spek, Peter J; Haak, Roxanna; Meijers-Heijboer, Hanne; van Leeuwen, Elisabeth M; Iyengar, Sudha K; Lass, Jonathan H; Hofman, Albert; Rivadeneira, Fernando; Uitterlinden, André G; Vingerling, Johannes R; Lehtimäki, Terho; Raitakari, Olli T; Biino, Ginevra; Concas, Maria Pina; Schwantes-An, Tae-Hwi; Igo, Robert P; Cuellar-Partida, Gabriel; Martin, Nicholas G; Craig, Jamie E; Gharahkhani, Puya; Williams, Katie M; Nag, Abhishek; Rahi, Jugnoo S; Cumberland, Phillippa M; Delcourt, Cécile; Bellenguez, Céline; Ried, Janina S; Bergen, Arthur A; Meitinger, Thomas; Gieger, Christian; Wong, Tien Yin; Hewitt, Alex W; Mackey, David A; Simpson, Claire L; Pfeiffer, Norbert; Pärssinen, Olavi; Baird, Paul N; Vitart, Veronique; Amin, Najaf; van Duijn, Cornelia M; Bailey-Wilson, Joan E; Young, Terri L; Saw, Seang-Mei; Stambolian, Dwight; MacGregor, Stuart; Guggenheim, Jeremy A; Tung, Joyce Y; Hammond, Christopher J; Klaver, Caroline C W

    2018-06-01

    Refractive errors, including myopia, are the most frequent eye disorders worldwide and an increasingly common cause of blindness. This genome-wide association meta-analysis in 160,420 participants and replication in 95,505 participants increased the number of established independent signals from 37 to 161 and showed high genetic correlation between Europeans and Asians (>0.78). Expression experiments and comprehensive in silico analyses identified retinal cell physiology and light processing as prominent mechanisms, and also identified functional contributions to refractive-error development in all cell types of the neurosensory retina, retinal pigment epithelium, vascular endothelium and extracellular matrix. Newly identified genes implicate novel mechanisms such as rod-and-cone bipolar synaptic neurotransmission, anterior-segment morphology and angiogenesis. Thirty-one loci resided in or near regions transcribing small RNAs, thus suggesting a role for post-transcriptional regulation. Our results support the notion that refractive errors are caused by a light-dependent retina-to-sclera signaling cascade and delineate potential pathobiological molecular drivers.

  15. Machine Translation as a Model for Overcoming Some Common Errors in English-into-Arabic Translation among EFL University Freshmen

    ERIC Educational Resources Information Center

    El-Banna, Adel I.; Naeem, Marwa A.

    2016-01-01

    This research work aimed at making use of Machine Translation to help students avoid some syntactic, semantic and pragmatic common errors in translation from English into Arabic. Participants were a hundred and five freshmen who studied the "Translation Common Errors Remedial Program" prepared by the researchers. A testing kit that…

  16. Outpatient CPOE orders discontinued due to 'erroneous entry': prospective survey of prescribers' explanations for errors.

    PubMed

    Hickman, Thu-Trang T; Quist, Arbor Jessica Lauren; Salazar, Alejandra; Amato, Mary G; Wright, Adam; Volk, Lynn A; Bates, David W; Schiff, Gordon

    2018-04-01

    Users of computerised prescriber order entry (CPOE) systems often discontinue medications because the initial order was erroneous. To elucidate error types, we queried prescribers about their reasons for discontinuing outpatient medication orders that they had self-identified as erroneous. During a nearly 3 year retrospective data collection period, we identified 57 972 drugs discontinued with the reason 'Error (erroneous entry)'. Because chart reviews revealed limited information about these errors, we prospectively studied consecutive discontinued erroneous orders by querying prescribers in near-real-time to learn more about them. From January 2014 to April 2014, we prospectively emailed prescribers about outpatient drug orders that they had discontinued due to erroneous initial order entry. Of 250 806 medication orders in these 4 months, 1133 (0.45%) were discontinued due to error. From these 1133, we emailed 542 unique prescribers to ask about their reason(s) for discontinuing these medication orders in error. We received 312 responses (58% response rate). We categorised these responses using a previously published taxonomy. The top reasons for these discontinued erroneous orders included: medication ordered for the wrong patient (27.8%, n=60); wrong drug ordered (18.5%, n=40); and duplicate order placed (14.4%, n=31). Other common discontinued erroneous orders related to drug dosage and formulation (eg, extended release versus not). Oxycodone (3%) was the drug most frequently discontinued in error. Drugs are not infrequently discontinued 'in error'. Wrong patient and wrong drug errors constitute the leading types of erroneous prescriptions recognised and discontinued by prescribers. Data regarding erroneous medication entries represent an important source of intelligence about how CPOE systems are functioning and malfunctioning, providing important insights regarding areas for designing CPOE more safely in the future. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  17. Assessing Mediational Models: Testing and Interval Estimation for Indirect Effects.

    PubMed

    Biesanz, Jeremy C; Falk, Carl F; Savalei, Victoria

    2010-08-06

    Theoretical models specifying indirect or mediated effects are common in the social sciences. An indirect effect exists when an independent variable's influence on the dependent variable is mediated through an intervening variable. Classic approaches to assessing such mediational hypotheses (Baron & Kenny, 1986; Sobel, 1982) have in recent years been supplemented by computationally intensive methods such as bootstrapping, the distribution-of-the-product method, and hierarchical Bayesian Markov chain Monte Carlo (MCMC) methods. These different approaches for assessing mediation are illustrated using data from Dunn, Biesanz, Human, and Finn (2007). However, little is known about how these methods perform relative to each other, particularly in more challenging situations, such as with data that are incomplete and/or nonnormal. This article presents an extensive Monte Carlo simulation evaluating a host of approaches for assessing mediation. We examine Type I error rates, power, and coverage. We study normal and nonnormal data as well as complete and incomplete data. In addition, we adapt a method, recently proposed in the statistical literature, that does not rely on confidence intervals (CIs) to test the null hypothesis of no indirect effect. The results suggest that the new inferential method, the partial posterior p value, slightly outperforms existing ones in terms of maintaining Type I error rates while maximizing power, especially with incomplete data. Among confidence interval approaches, the bias-corrected accelerated (BCa) bootstrapping approach often has inflated Type I error rates and inconsistent coverage and is not recommended. In contrast, the bootstrapped percentile confidence interval and the hierarchical Bayesian MCMC method perform best overall, maintaining Type I error rates, exhibiting reasonable power, and producing stable and accurate coverage rates.
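    Of the approaches compared, the percentile bootstrap is the easiest to sketch. The snippet below (simulated data, ordinary least squares for both paths) resamples cases, recomputes the product of the X -> M and M -> Y (given X) slopes, and reads the confidence interval off the bootstrap percentiles.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    n = 200
    x = rng.normal(size=n)
    med = 0.5 * x + rng.normal(size=n)            # mediator
    y = 0.4 * med + 0.1 * x + rng.normal(size=n)  # outcome

    def indirect_effect(idx):
        xs, ms, ys = x[idx], med[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                       # X -> M slope
        design = np.column_stack([np.ones(len(idx)), ms, xs])
        b = np.linalg.lstsq(design, ys, rcond=None)[0][1]  # M -> Y slope, X partialled out
        return a * b

    boot = np.array([indirect_effect(rng.integers(0, n, n)) for _ in range(2000)])
    low, high = np.percentile(boot, [2.5, 97.5])
    print("indirect effect %.3f, 95%% percentile CI (%.3f, %.3f)"
          % (indirect_effect(np.arange(n)), low, high))
    ```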

  18. The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking.

    PubMed

    Norman, Geoffrey R; Monteiro, Sandra D; Sherbino, Jonathan; Ilgen, Jonathan S; Schmidt, Henk G; Mamede, Silvia

    2017-01-01

    Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits? The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.

  19. Medication errors in anesthesia: unacceptable or unavoidable?

    PubMed

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are a common cause of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects, including death, medication errors need attention on a priority basis because they are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not succeed until a change in the existing protocols and systems is incorporated. Often, drug errors that occur cannot be reversed; the best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse or dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems such as VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.

  20. Identification and correction of systematic error in high-throughput sequence data

    PubMed Central

    2011-01-01

    Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
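    The core statistical idea, flagging positions where mismatches pile up far beyond what the per-base error rate allows, fits in a few lines. The sketch below is not the SysCall classifier; the error rate and threshold are invented, and a real tool must additionally separate such pile-ups from genuine heterozygous sites (where the alternate-allele fraction is near 0.5).

    ```python
    from scipy.stats import binom

    def is_suspect_site(coverage: int, mismatches: int,
                        per_base_error: float = 0.01, alpha: float = 1e-6) -> bool:
        """Flag a site if seeing >= `mismatches` errors is binomially implausible."""
        p_tail = binom.sf(mismatches - 1, coverage, per_base_error)  # P(X >= mismatches)
        return p_tail < alpha

    print(is_suspect_site(coverage=100, mismatches=3))   # plausible sequencing noise
    print(is_suspect_site(coverage=100, mismatches=20))  # statistically unlikely pile-up
    ```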

  1. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    EIA Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of the data collectors. The type and extent of error depend on the type and characteristics of the survey.

  2. Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research

    PubMed Central

    Bakker, Marjan; Wicherts, Jelte M.

    2014-01-01

    Background The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606
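    The degrees-of-freedom check behind the 41% figure is simple to state: for an independent-samples t test, the degrees of freedom should equal N minus the number of groups, so a mismatch suggests silently excluded or missing observations. A minimal sketch with hypothetical numbers:

    ```python
    def df_consistent(reported_df: int, reported_n: int, groups: int = 2) -> bool:
        """True when a two-sample t test's df matches the reported sample size."""
        return reported_df == reported_n - groups

    print(df_consistent(reported_df=58, reported_n=60))  # consistent: df = N - 2
    print(df_consistent(reported_df=51, reported_n=60))  # 7 observations unaccounted for
    ```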

  3. Evidence-based anatomical review areas derived from systematic analysis of cases from a radiological departmental discrepancy meeting.

    PubMed

    Chin, S C; Weir-McCall, J R; Yeap, P M; White, R D; Budak, M J; Duncan, G; Oliver, T B; Zealley, I A

    2017-10-01

    To produce short checklists of specific anatomical review sites for different regions of the body based on the frequency of radiological errors reviewed at radiology discrepancy meetings, thereby creating "evidence-based" review areas for radiology reporting. A single-centre discrepancy database covering a 5-year period was retrospectively reviewed. All errors were classified by type, modality, body system, and specific anatomical location. Errors were assigned to one of four body regions: chest, abdominopelvic, central nervous system (CNS), and musculoskeletal (MSK). Frequencies of errors in anatomical locations were then analysed. There were 561 errors in 477 examinations; 290 (46%) errors occurred in the abdomen/pelvis, 99 (15.7%) in the chest, 117 (18.5%) in the CNS, and 125 (19.9%) in the MSK system. In each body system, the five most common locations were chest: lung bases on computed tomography (CT), apices on radiography, pulmonary vasculature, bones, and mediastinum; abdominopelvic: vasculature, colon, kidneys, liver, and pancreas; CNS: intracranial vasculature, peripheral cerebral grey matter, bone, parafalcine region, and the frontotemporal lobes surrounding the Sylvian fissure; and MSK: calvarium, sacrum, pelvis, chest, and spine. The five listed locations accounted for >50% of all perceptual errors, suggesting an avenue for focused review at the end of reporting. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  4. The causes of and factors associated with prescribing errors in hospital inpatients: a systematic review.

    PubMed

    Tully, Mary P; Ashcroft, Darren M; Dornan, Tim; Lewis, Penny J; Taylor, David; Wass, Val

    2009-01-01

    Prescribing errors are common; they result in adverse events and harm to patients, and it is unclear how best to prevent them because recommendations are more often based on surmised rather than empirically collected data. The aim of this systematic review was to identify all informative published evidence concerning the causes of and factors associated with prescribing errors in specialist and non-specialist hospitals, collate it, analyse it qualitatively and synthesize conclusions from it. Seven electronic databases were searched for articles published between 1985 and July 2008. The reference lists of all informative studies were searched for additional citations. To be included, a study had to be of handwritten prescriptions for adult or child inpatients and report empirically collected data on the causes of or factors associated with errors. Publications in languages other than English and studies that evaluated errors for only one disease, one route of administration or one type of prescribing error were excluded. Seventeen papers reporting 16 studies, selected from 1268 papers identified by the search, were included in the review. Studies from the US and the UK in university-affiliated hospitals predominated (10/16 [62%]). The definition of a prescribing error varied widely and the included studies were highly heterogeneous. Causes were grouped according to Reason's model of accident causation into active failures, error-provoking conditions and latent conditions. The active failure most frequently cited was a mistake due to inadequate knowledge of the drug or the patient. Skill-based slips and memory lapses were also common. Where error-provoking conditions were reported, there was at least one per error. These included lack of training or experience, fatigue, stress, high workload for the prescriber and inadequate communication between healthcare professionals. Latent conditions included reluctance to question senior colleagues and inadequate provision of training. Prescribing errors are often multifactorial, with several active failures and error-provoking conditions often acting together to cause them. In the face of such complexity, solutions addressing a single cause, such as lack of knowledge, are likely to have only limited benefit. Further rigorous study, seeking potential ways of reducing error, needs to be conducted. Multifactorial interventions across many parts of the system are likely to be required.

  5. Thirteen year retrospective review of the spectrum of inborn errors of metabolism presenting in a tertiary center in Saudi Arabia.

    PubMed

    Alfadhel, Majid; Benmeakel, Mohammed; Hossain, Mohammad Arif; Al Mutairi, Fuad; Al Othaim, Ali; Alfares, Ahmed A; Al Balwi, Mohammed; Alzaben, Abdullah; Eyaid, Wafaa

    2016-09-15

    Inborn errors of metabolism (IEMs) are individually rare; however, they are collectively common. More than 600 human diseases caused by inborn errors of metabolism are now recognized, and this number is constantly increasing as new concepts and techniques become available for identifying biochemical phenotypes. The aim of this study was to determine the type and distribution of IEMs in patients presenting to a tertiary care center in Saudi Arabia. We conducted a retrospective review of children diagnosed with IEMs presenting to the Pediatric Department of King Abdulaziz Medical City in Riyadh, Saudi Arabia over a 13-year period. Over this period, the total number of live births reached 110,601. A total of 187 patients were diagnosed with IEMs, representing an incidence of 169 per 100,000 live births (1:591). Of these, 121 patients (64.7%) were identified as having small molecule diseases and 66 (35.3%) as having large molecule diseases. Organic acidemias were the most common small molecule IEMs, while lysosomal storage disorders (LSDs) were the most common large molecule diseases. Sphingolipidoses were the most common LSDs. Our study confirms previous reports of the high rate of IEMs in Saudi Arabia and urges health care strategists in the country to devise a long-term strategic plan, including an IEM national registry and a high school carrier screening program, for the prevention of such disorders. In addition, we identified 43 novel mutations that were not described previously, which will help in the molecular diagnosis of these disorders.

  6. Pattern of refractive errors among the Nepalese population: a retrospective study.

    PubMed

    Shrestha, S P; Bhat, K S; Binu, V S; Barthakur, R; Natarajan, M; Subba, S H

    2010-01-01

    Refractive errors are a major cause of visual impairment in the population. The aim of this study was to find the pattern of refractive errors among patients evaluated in a tertiary care hospital in the western region of Nepal. The present hospital-based retrospective study was conducted in the Department of Ophthalmology of the Manipal Teaching Hospital, situated in Pokhara, Nepal. Patients who had a refractive error of at least 0.5 D (dioptre) were included in the study. During the study period, 15,410 patients attended the outpatient department, and 10.8% of them were identified as having refractive error. The age of the patients ranged between 5 and 90 years. Myopia was the commonest refractive error, followed by hypermetropia. There was no difference in the frequency of the types of refractive error when they were defined using the right eye, the left eye or both eyes. Males predominated among myopes and females among hypermetropes. The majority of spherical errors were less than or equal to 2 D. Astigmatic power above 1 D was rarely seen with hypermetropic astigmatism but was seen in around 13% of cases with myopic astigmatism. "Astigmatism against the rule" was more common than "astigmatism with the rule", irrespective of age. Refractive errors shift progressively toward myopia up to the third decade and toward hypermetropia until the seventh decade. The hyperopic shift in refractive error in young adults should be well noted while planning any refractive surgery in younger patients with myopia. © Nepal Ophthalmic Society.

  7. Errors in otology.

    PubMed

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  8. Prevalence and reporting of recruitment, randomisation and treatment errors in clinical trials: A systematic review.

    PubMed

    Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A

    2018-06-01

    Background/aims In clinical trials, it is not unusual for errors to occur during the process of recruiting, randomising and providing treatment to participants. For example, an ineligible participant may inadvertently be randomised, a participant may be randomised in the incorrect stratum, a participant may be randomised multiple times when only a single randomisation is permitted or the incorrect treatment may inadvertently be issued to a participant at randomisation. Such errors have the potential to introduce bias into treatment effect estimates and affect the validity of the trial, yet there is little motivation for researchers to report these errors and it is unclear how often they occur. The aim of this study is to assess the prevalence of recruitment, randomisation and treatment errors and review current approaches for reporting these errors in trials published in leading medical journals. Methods We conducted a systematic review of individually randomised, phase III, randomised controlled trials published in New England Journal of Medicine, Lancet, Journal of the American Medical Association, Annals of Internal Medicine and British Medical Journal from January to March 2015. The number and type of recruitment, randomisation and treatment errors that were reported and how they were handled were recorded. The corresponding authors were contacted for a random sample of trials included in the review and asked to provide details on unreported errors that occurred during their trial. Results We identified 241 potentially eligible articles, of which 82 met the inclusion criteria and were included in the review. These trials involved a median of 24 centres and 650 participants, and 87% involved two treatment arms. Recruitment, randomisation or treatment errors were reported in 32 of 82 trials (39%), with a median of eight errors per trial. The most commonly reported error was ineligible participants inadvertently being randomised. No mention of recruitment, randomisation or treatment errors was found in the remaining 50 of 82 trials (61%). Based on responses from 9 of the 15 corresponding authors who were contacted regarding recruitment, randomisation and treatment errors, between 1% and 100% of the errors that occurred in their trials were reported in the trial publications. Conclusion Recruitment, randomisation and treatment errors are common in individually randomised, phase III trials published in leading medical journals, but reporting practices are inadequate and reporting standards are needed. We recommend that researchers report all such errors that occurred during the trial and describe how they were handled in trial publications to improve transparency in the reporting of clinical trials.

  9. A Cross “Ethnical” Comparison of the Driver Behaviour Questionnaire (DBQ) in an Economically Fast Developing Country

    PubMed Central

    Bener, Abdulbari; Verjee, Mohamud; Dafeeah, Elnour E.; Yousafzai, Mohammad T.; Mari, Sundus; Hassib, Ahmed; Al-Khatib, Hamza; Choi, Min Kyung; Nema, Noor; Özkan, Türker; Lajunen, Timo

    2013-01-01

    Aim: The aim of this study was to compare the driving behaviours of four ethnic groups and to investigate the relationship between violations, errors and lapses of DBQ and accident involvement in Qatar. Subjects and Methods: The Driver Behaviour Questionnaire (DBQ) was used to measure the aberrant driving behaviours leading to accidents. Of 2400 drivers approached, 1824 drivers agreed to participate (76%) and completed the driver behaviour questionnaire and background information. Results: The study revealed that the majority of the Qatari (35.9%) and Jordanian drivers (37.5%) were below 30 years of age, whereas Filipino (42.3%) and Indian subcontinent (34.1%) drivers were in the age group of 30-39 years. Qatari drivers (52%) were involved in most accidents, followed by Jordanians (48.3%). The most common type of collision was a head-on collision, which was similar in all four ethnic groups. The Qatari drivers scored higher on almost all items of violations, errors and lapses compared to other ethnic groups, while Filipino drivers were lower on all the items. The most common violation was the same in all four ethnic groups: “Disregard the speed limits on a motorway”. The most common error item observed was “Queuing to turn right/left on to a main road”. “Forget where you left your car” and “Hit something when reversing” were the two lapses identified in factor analysis. Conclusion: The present study identified that Qatari drivers scored higher on most of the items of violations, errors and lapses of DBQ compared with the other ethnic groups, whereas Filipino drivers scored lower on the DBQ items. PMID:23777732

  10. Cognitive bias in clinical practice - nurturing healthy skepticism among medical students.

    PubMed

    Bhatti, Alysha

    2018-01-01

    Errors in clinical reasoning, known as cognitive biases, are implicated in a significant proportion of diagnostic errors. Despite this knowledge, little emphasis is currently placed on teaching cognitive psychology in the undergraduate medical curriculum. Understanding the origin of these biases and their impact on clinical decision making helps stimulate reflective practice. This article outlines some of the common types of cognitive biases encountered in the clinical setting as well as cognitive debiasing strategies. Medical educators should nurture healthy skepticism among medical students by raising awareness of cognitive biases and equipping them with robust tools to circumvent such biases. This will enable tomorrow's doctors to improve the quality of care delivered, thus optimizing patient outcomes.

  11. Calibrated Bayes Factors Should Not Be Used: A Reply to Hoijtink, van Kooten, and Hulsker.

    PubMed

    Morey, Richard D; Wagenmakers, Eric-Jan; Rouder, Jeffrey N

    2016-01-01

    Hoijtink, van Kooten, and Hulsker (2016) present a method for choosing the prior distribution for an analysis with Bayes factors that is based on controlling error rates, which they advocate as an alternative to our more subjective methods (Morey & Rouder, 2014; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Wagenmakers, Wetzels, Borsboom, & van der Maas, 2011). We show that the method they advocate amounts to a simple significance test, and that the resulting Bayes factors are not interpretable. Additionally, their method fails in common circumstances, and has the potential to yield arbitrarily high Type II error rates. After critiquing their method, we outline the position on subjectivity that underlies our advocacy of Bayes factors.
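
    To make the disputed quantity concrete, the following is a minimal sketch of the default one-sample JZS Bayes factor of Rouder et al. (2009), computed by numerical integration. It illustrates the subjective-prior Bayes factor being defended, not the calibration procedure of Hoijtink and colleagues; the function name and the example values of t and N are ours.

    ```python
    import numpy as np
    from scipy import integrate

    def jzs_bf01(t, n):
        """One-sample JZS Bayes factor BF01 (null over alternative),
        following Rouder et al. (2009)."""
        v = n - 1  # degrees of freedom

        # Marginal likelihood under H0 (up to a factor shared with H1)
        numerator = (1 + t**2 / v) ** (-(v + 1) / 2)

        # Under H1, integrate over the Cauchy prior on effect size via its
        # inverse-gamma mixture representation (g is the mixing variable)
        def integrand(g):
            return ((1 + n * g) ** -0.5
                    * (1 + t**2 / ((1 + n * g) * v)) ** (-(v + 1) / 2)
                    * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

        denominator, _ = integrate.quad(integrand, 0, np.inf)
        return numerator / denominator

    # A t of 2.0 with N = 50 is "significant" at the 5% level, yet the
    # Bayes factor is close to 1, i.e. the evidence is ambiguous:
    print(jzs_bf01(2.0, 50))  # roughly 1.3, weakly favouring the null
    ```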

  12. Minimum number of clusters and comparison of analysis methods for cross sectional stepped wedge cluster randomised trials with binary outcomes: A simulation study.

    PubMed

    Barker, Daniel; D'Este, Catherine; Campbell, Michael J; McElduff, Patrick

    2017-03-09

    Stepped wedge cluster randomised trials frequently involve a relatively small number of clusters. The most common frameworks used to analyse data from these types of trials are generalised estimating equations and generalised linear mixed models. A topic of much research into these methods has been their application to cluster randomised trial data and, in particular, the number of clusters required to make reasonable inferences about the intervention effect. However, for stepped wedge trials, which have been claimed by many researchers to have a statistical power advantage over the parallel cluster randomised trial, the minimum number of clusters required has not been investigated. We conducted a simulation study where we considered the most commonly used methods suggested in the literature to analyse cross-sectional stepped wedge cluster randomised trial data. We compared the per cent bias, the type I error rate and power of these methods in a stepped wedge trial setting with a binary outcome, where there are few clusters available and when the appropriate adjustment is made for a time trend, which by design may confound the intervention effect. We found that the generalised linear mixed modelling approach is the most consistent when few clusters are available. We also found that none of the common analysis methods for stepped wedge trials were both unbiased and maintained a 5% type I error rate when there were only three clusters. Of the commonly used analysis approaches, we recommend the generalised linear mixed model for small stepped wedge trials with binary outcomes. We also suggest that in a stepped wedge design with three steps, at least two clusters be randomised at each step, to ensure that the intervention effect estimator maintains the nominal 5% significance level and is also reasonably unbiased.
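
    As a rough illustration of the kind of simulation involved (not the authors' code), the sketch below generates cross-sectional stepped wedge data with a binary outcome and estimates the empirical type I error of one commonly used analysis: GEE with an exchangeable working correlation. GEE is chosen only because it is readily available in Python's statsmodels; all design parameters are invented. With as few as six clusters, the rejection rate typically exceeds the nominal 5%, consistent with the paper's point about small numbers of clusters.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    def simulate_sw(n_clusters=6, n_periods=4, n_per=25, effect=0.0, re_sd=0.3):
        """One cross-sectional stepped wedge data set with a binary outcome."""
        # Clusters cross from control to intervention at staggered periods
        cross = np.repeat(np.arange(1, n_periods), n_clusters // (n_periods - 1))
        u = rng.normal(0.0, re_sd, n_clusters)  # cluster random intercepts
        rows = []
        for c in range(n_clusters):
            for t in range(n_periods):
                treat = int(t >= cross[c])
                eta = -0.5 + 0.1 * t + effect * treat + u[c]  # secular time trend
                for y in rng.binomial(1, 1 / (1 + np.exp(-eta)), n_per):
                    rows.append((c, t, treat, y))
        return pd.DataFrame(rows, columns=["cluster", "period", "treat", "y"])

    rejections = 0
    n_sim = 200
    for _ in range(n_sim):
        df = simulate_sw(effect=0.0)  # null is true: any rejection is an error
        X = pd.get_dummies(df["period"], prefix="t", drop_first=True).astype(float)
        X["treat"] = df["treat"]
        X = sm.add_constant(X)
        fit = sm.GEE(df["y"], X, groups=df["cluster"],
                     family=sm.families.Binomial(),
                     cov_struct=sm.cov_struct.Exchangeable()).fit()
        rejections += fit.pvalues["treat"] < 0.05
    print("empirical type I error:", rejections / n_sim)
    ```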

  13. Medication errors in paediatric care: a systematic review of epidemiology and an evaluation of evidence supporting reduction strategy recommendations

    PubMed Central

    Miller, Marlene R; Robinson, Karen A; Lubomski, Lisa H; Rinke, Michael L; Pronovost, Peter J

    2007-01-01

    Background Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps are in our knowledge, and to what extent national medication error reduction strategies focus on children. Objective To synthesise peer reviewed knowledge on children's medication errors and on recommendations to improve paediatric medication safety by a systematic literature review. Data sources PubMed, Embase and Cinahl from 1 January 2000 to 30 April 2005, and 11 national entities that have disseminated recommendations to improve medication safety. Study selection Inclusion criteria were peer reviewed original data in the English language. Studies that did not separately report paediatric data were excluded. Data extraction Two reviewers screened articles for eligibility and for data extraction, and screened all national medication error reduction strategies for relevance to children. Data synthesis From 358 articles identified, 31 were included for data extraction. The definition of medication error was non‐uniform across the studies. Dispensing and administering errors were the most poorly and non‐uniformly evaluated. Overall, the distributional epidemiological estimates of the relative percentages of paediatric error types were: prescribing 3–37%, dispensing 5–58%, administering 72–75%, and documentation 17–21%. 26 unique recommendations for strategies to reduce medication errors were identified; none were based on paediatric evidence. Conclusions Medication errors occur across the entire spectrum of prescribing, dispensing, and administering; they are common, and the many potential reduction strategies proposed for them lack an evidence base. Further research in this area needs a firmer standardisation for items such as dose ranges and definitions of medication errors, broader scope beyond inpatient prescribing errors, and prioritisation of implementation of medication error reduction strategies. PMID:17403758

  14. Stroke type differentiation using spectrally constrained multifrequency EIT: evaluation of feasibility in a realistic head model.

    PubMed

    Malone, Emma; Jehl, Markus; Arridge, Simon; Betcke, Timo; Holder, David

    2014-06-01

    We investigate the application of multifrequency electrical impedance tomography (MFEIT) to imaging the brain in stroke patients. The use of MFEIT could enable early diagnosis and thrombolysis of ischaemic stroke, and therefore improve the outcome of treatment. Recent advances in the imaging methodology suggest that the use of spectral constraints could allow for the reconstruction of a one-shot image. We performed a simulation study to investigate the feasibility of imaging stroke in a head model with realistic conductivities. We introduced increasing levels of modelling errors to test the robustness of the method to the most common sources of artefact. We considered the case of errors in the electrode placement, spectral constraints, and contact impedance. The results indicate that errors in the position and shape of the electrodes can affect image quality, although our imaging method was successful in identifying tissues with sufficiently distinct spectra.

  15. Errors Analysis of Solving Linear Inequalities among the Preparatory Year Students at King Saud University

    ERIC Educational Resources Information Center

    El-khateeb, Mahmoud M. A.

    2016-01-01

    This study aims to investigate the classes of errors made by preparatory-year students at King Saud University, through analysis of student responses to the study test items, and to identify the varieties of common errors and the rates at which they occurred in solving inequalities. In the collection of the data,…

  16. Benchmarking observational uncertainties for hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.

    2013-12-01

    There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, to understand their information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), error characteristics measured (e.g. standard error, confidence bounds) and error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale; flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.

  17. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operationally superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
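
    The central claim, that with a fixed likelihood-ratio benchmark both error rates shrink as data accumulate, is easy to check in a toy setting of two simple hypotheses. The effect size, the threshold k = 8 and the sample sizes below are arbitrary illustrative choices, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    delta, sigma, k = 0.5, 1.0, 8.0  # effect size, noise SD, evidence threshold

    def log_lr(x):
        """log likelihood ratio of H1: mu = delta versus H0: mu = 0
        for iid normal data with known sigma."""
        return x.sum() * delta / sigma**2 - x.size * delta**2 / (2 * sigma**2)

    for n in (10, 40, 160):
        null = np.array([log_lr(rng.normal(0.0, sigma, n)) for _ in range(5000)])
        alt = np.array([log_lr(rng.normal(delta, sigma, n)) for _ in range(5000)])
        alpha = np.mean(null > np.log(k))  # strong evidence for H1 although H0 holds
        beta = np.mean(alt < np.log(k))    # evidence fails to reach k although H1 holds
        print(f"n={n:4d}  alpha={alpha:.4f}  beta={beta:.4f}")
    ```

    Both rates fall together as n grows, which is the behaviour the proposal exploits when the number of comparisons is very large.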

  18. Hospital-based transfusion error tracking from 2005 to 2010: identifying the key errors threatening patient transfusion safety.

    PubMed

    Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie

    2014-01-01

    This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).

  19. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials. © 2016 APA, all rights reserved.
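
    A minimal sketch of the recommended model class on simulated data (the paper's own SPSS and R syntax is in its supplemental materials; every variable name here is hypothetical). Session length enters as an exposure term rather than being divided out of the counts; reliability weights, where available, could be passed through GLM's var_weights argument.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 200

    # Hypothetical session-level data: counts of a coded therapist behaviour,
    # one predictor, and session length in minutes
    minutes = rng.uniform(20, 90, n)
    skill = rng.normal(0, 1, n)
    mu = np.exp(-3.0 + 0.4 * skill) * minutes  # expected count scales with length
    r = 2.0                                    # NB dispersion (size) parameter
    counts = rng.negative_binomial(r, r / (r + mu))

    X = sm.add_constant(skill)
    model = sm.GLM(counts, X,
                   family=sm.families.NegativeBinomial(alpha=1 / r),
                   exposure=minutes)  # log(minutes) offset: models a rate
    print(model.fit().summary())
    ```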

  1. A survey of the prevalence of refractive errors among children in lower primary schools in Kampala district.

    PubMed

    Kawuma, Medi; Mayeku, Robert

    2002-08-01

    Refractive errors are a known cause of visual impairment and may cause blindness worldwide. In children, refractive errors may prevent those afflicted from progressing with their studies. In Uganda, like in many developing countries, there is no established vision-screening programme for children on commencement of school, such that those with early onset of such errors will have many years of poor vision. Overall, there is limited information on refractive errors among children in Africa. To determine the prevalence of refractive errors among school children attending lower primary in Kampala district; the frequency of the various types of refractive errors, and their relationship to sex and ethnicity. A cross-sectional descriptive study was conducted in Kampala district, Uganda. A total of 623 children aged between 6 and 9 years had visual acuity testing done at school using the same protocol; of these 301 (48.3%) were boys and 322 (51.7%) girls. Seventy-three children had a significant refractive error of +/-0.50 or worse in one or both eyes, giving a prevalence of 11.6%, and the commonest single refractive error was astigmatism, which accounted for 52% of all errors. This was followed by hypermetropia, and myopia was the least common. Significant refractive errors occur among primary school children aged 6 to 9 years at a prevalence of approximately 12%. There is therefore a need for regular and simple vision testing in primary school children, at least at the commencement of school, so as to detect those who may suffer from these disabilities.

  2. Understanding the nature of errors in nursing: using a model to analyse critical incident reports of errors which had resulted in an adverse or potentially adverse event.

    PubMed

    Meurier, C E

    2000-07-01

    Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.

  3. Examining Power and Type 1 Error for Step and Item Level Tests of Invariance: Investigating the Effect of the Number of Item Score Levels

    ERIC Educational Resources Information Center

    Ayodele, Alicia Nicole

    2017-01-01

    Within polytomous items, differential item functioning (DIF) can take on various forms due to the number of response categories. The lack of invariance at this level is referred to as differential step functioning (DSF). The most common DSF methods in the literature are the adjacent category log odds ratio (AC-LOR) estimator and cumulative…

  4. NRL Radar Division C++ Coding Standard

    DTIC Science & Technology

    2016-12-05

    The coding standard provides tools aimed at helping C++ programmers develop programs that are free of common types of errors, maintainable by different programmers, portable to other operating systems, easy to read and understand, and consistent in style. Use of such a standard is regarded as mandatory for any organization with quality goals.

  5. Associations of hallucination proneness with free-recall intrusions and response bias in a nonclinical sample.

    PubMed

    Brébion, Gildas; Larøi, Frank; Van der Linden, Martial

    2010-10-01

    Hallucinations in patients with schizophrenia have been associated with a liberal response bias in signal detection and recognition tasks and with various types of source-memory error. We investigated the associations of hallucination proneness with free-recall intrusions and false recognitions of words in a nonclinical sample. A total of 81 healthy individuals were administered a verbal memory task involving free recall and recognition of one nonorganizable and one semantically organizable list of words. Hallucination proneness was assessed by means of a self-rating scale. Global hallucination proneness was associated with free-recall intrusions in the nonorganizable list and with a response bias reflecting a tendency to make false recognitions of nontarget words in both types of list. The verbal hallucination score was associated with more intrusions and with a reduced tendency to make false recognitions of words. The associations between global hallucination proneness and two types of verbal memory error in a nonclinical sample corroborate those observed in patients with schizophrenia and suggest that common cognitive mechanisms underlie hallucinations in psychiatric and nonclinical individuals.

  6. Unified Computational Methods for Regression Analysis of Zero-Inflated and Bound-Inflated Data

    PubMed Central

    Yang, Yan; Simpson, Douglas

    2010-01-01

    Bounded data with excess observations at the boundary are common in many areas of application. Various individual cases of inflated mixture models have been studied in the literature for bound-inflated data, yet the computational methods have been developed separately for each type of model. In this article we use a common framework for computing these models, and expand the range of models for both discrete and semi-continuous data with point inflation at the lower boundary. The quasi-Newton and EM algorithms are adapted and compared for estimation of model parameters. The numerical Hessian and generalized Louis method are investigated as means for computing standard errors after optimization. Correlated data are included in this framework via generalized estimating equations. The estimation of parameters and effectiveness of standard errors are demonstrated through simulation and in the analysis of data from an ultrasound bioeffect study. The unified approach enables reliable computation for a wide class of inflated mixture models and comparison of competing models. PMID:20228950
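
    To make the model family concrete, here is a minimal zero-inflated Poisson fit on simulated data using statsmodels. It shows one member of the inflated-mixture class the article unifies, not the authors' own quasi-Newton/EM implementation; all names and coefficients are invented.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.discrete.count_model import ZeroInflatedPoisson

    rng = np.random.default_rng(3)
    n = 500
    x = rng.normal(size=n)
    X = sm.add_constant(x)

    # A logit process generates "structural" zeros at the boundary;
    # a Poisson process generates the remaining counts
    p_zero = 1 / (1 + np.exp(-(-1.0 + 0.8 * x)))
    lam = np.exp(0.5 + 0.3 * x)
    y = np.where(rng.random(n) < p_zero, 0, rng.poisson(lam))

    fit = ZeroInflatedPoisson(y, X, exog_infl=X, inflation="logit").fit(disp=0)
    print(fit.summary())
    ```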

  7. Medication errors: problems and recommendations from a consensus meeting

    PubMed Central

    Agrawal, Abha; Aronson, Jeffrey K; Britten, Nicky; Ferner, Robin E; de Smet, Peter A; Fialová, Daniela; Fitzgerald, Richard J; Likić, Robert; Maxwell, Simon R; Meyboom, Ronald H; Minuz, Pietro; Onder, Graziano; Schachter, Michael; Velo, Giampaolo

    2009-01-01

    Here we discuss 15 recommendations for reducing the risks of medication errors: Provision of sufficient undergraduate learning opportunities to make medical students safe prescribers. Provision of opportunities for students to practise skills that help to reduce errors. Education of students about common types of medication errors and how to avoid them. Education of prescribers in taking accurate drug histories. Assessment in medical schools of prescribing knowledge and skills and demonstration that newly qualified doctors are safe prescribers. European harmonization of prescribing and safety recommendations and regulatory measures, with regular feedback about rational drug use. Comprehensive assessment of elderly patients for declining function. Exploration of low-dose regimens for elderly patients and preparation of special formulations as required. Training for all health-care professionals in drug use, adverse effects, and medication errors in elderly people. More involvement of pharmacists in clinical practice. Introduction of integrated prescription forms and national implementation in individual countries. Development of better monitoring systems for detecting medication errors, based on classification and analysis of spontaneous reports of previous reactions, and for investigating the possible role of medication errors when patients die. Use of IT systems, when available, to provide methods of avoiding medication errors; standardization, proper evaluation, and certification of clinical information systems. Nonjudgmental communication with patients about their concerns and elicitation of symptoms that they perceive to be adverse drug reactions. Avoidance of defensive reactions if patients mention symptoms resulting from medication errors. PMID:19594525

  8. Retrospective analysis of refractive errors in children with vision impairment.

    PubMed

    Du, Jojo W; Schmid, Katrina L; Bevan, Jennifer D; Frater, Karen M; Ollett, Rhondelle; Hein, Bronwyn

    2005-09-01

    Emmetropization is the reduction in neonatal refractive errors that occurs after birth. Ocular disease may affect this process. We aimed to determine the relative frequency of ocular conditions causing vision impairment in the pediatric population and characterize the refractive anomalies present. We also compared the causes of vision impairment in children today to those between 1974 and 1981. Causes of vision impairment and refractive data of 872 children attending a pediatric low-vision clinic from 1985 to 2002 were retrospectively collated. As a result of associated impairments, refractive data were not available for 59 children. An analysis was made of the causes of vision impairment, the distribution of refractive errors in children with vision impairment, and the average type of refractive error for the most commonly seen conditions. We found that cortical or cerebral vision impairment (CVI) was the most common condition causing vision impairment, accounting for 27.6% of cases. This was followed by albinism (10.6%), retinopathy of prematurity (ROP; 7.0%), optic atrophy (6.2%), and optic nerve hypoplasia (5.3%). Vision impairment was associated with ametropia; fewer than 25% of the children had refractive errors within +/-1 D. The refractive error frequency plots (for 0 to 2-, 6 to 8-, and 12 to 14-year age bands) had a Gaussian distribution, indicating that the emmetropization process was abnormal. The mean spherical equivalent refractive error of the children (n = 813) was +0.78 +/- 6.00 D with 0.94 +/- 1.24 D of astigmatism and 0.92 +/- 2.15 D of anisometropia. Most conditions causing vision impairment such as albinism were associated with low amounts of hyperopia. Moderate myopia was observed in children with ROP. The relative frequency of ocular conditions causing vision impairment in children has changed since the 1970s. Children with vision impairment often have an associated ametropia, suggesting that the emmetropization system is also impaired.

  9. Robustness of meta-analyses in finding gene × environment interactions

    PubMed Central

    Shi, Gang; Nehorai, Arye

    2017-01-01

    Meta-analyses that synthesize statistical evidence across studies have become important analytical tools for genetic studies. Inspired by the success of genome-wide association studies of the genetic main effect, researchers are searching for gene × environment interactions. Confounders are routinely included in the genome-wide gene × environment interaction analysis as covariates; however, this does not control for any confounding effects on the results if covariate × environment interactions are present. We carried out simulation studies to evaluate the robustness to the covariate × environment confounder for meta-regression and joint meta-analysis, which are two commonly used meta-analysis methods for testing the gene × environment interaction or the genetic main effect and interaction jointly. Here we show that meta-regression is robust to the covariate × environment confounder while joint meta-analysis is subject to the confounding effect with inflated type I error rates. Given vast sample sizes employed in genome-wide gene × environment interaction studies, non-significant covariate × environment interactions at the study level could substantially elevate the type I error rate at the consortium level. When covariate × environment confounders are present, type I errors can be controlled in joint meta-analysis by including the covariate × environment terms in the analysis at the study level. Alternatively, meta-regression can be applied, which is robust to potential covariate × environment confounders. PMID:28362796
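
    In this setting, meta-regression amounts to an inverse-variance weighted regression of study-level interaction estimates on study-level covariates. The sketch below shows the mechanics on simulated consortium data under the null; study counts, exposure summaries and standard errors are all invented.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)

    # Hypothetical G x E interaction estimates from 20 cohorts
    k = 20
    env_mean = rng.uniform(0, 1, k)   # mean environmental exposure per study
    se = rng.uniform(0.05, 0.15, k)   # standard error of each estimate
    beta_ge = rng.normal(0.0, se)     # true interaction effect is zero

    # Meta-regression: weight each study by the inverse variance of its estimate
    X = sm.add_constant(env_mean)
    fit = sm.WLS(beta_ge, X, weights=1.0 / se**2).fit()
    print(fit.params, fit.pvalues)
    ```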

  10. Identifying types and causes of errors in mortality data in a clinical registry using multiple information systems.

    PubMed

    Koetsier, Antonie; Peek, Niels; de Keizer, Nicolette

    2012-01-01

    Errors may occur in the registration of in-hospital mortality, making it less reliable as a quality indicator. We assessed the types of errors made in in-hospital mortality registration in the clinical quality registry National Intensive Care Evaluation (NICE) by comparing its mortality data to data from a national insurance claims database. Subsequently, we performed site visits at eleven Intensive Care Units (ICUs) to investigate the number, types and causes of errors made in in-hospital mortality registration. A total of 255 errors were found in the NICE registry. Two different types of software malfunction accounted for almost 80% of the errors. The remaining 20% were five types of manual transcription errors and human failures to record outcome data. Clinical registries should be aware of the possible existence of errors in recorded outcome data and understand their causes. In order to prevent errors, we recommend to thoroughly verify the software that is used in the registration process.

  11. Correlation, necessity, and sufficiency: Common errors in the scientific reasoning of undergraduate students for interpreting experiments.

    PubMed

    Coleman, Aaron B; Lam, Diane P; Soowal, Lara N

    2015-01-01

    Gaining an understanding of how science works is central to an undergraduate education in biology and biochemistry. The reasoning required to design or interpret experiments that ask specific questions does not come naturally, and is an essential part of the science process skills that must be learned for an understanding of how scientists conduct research. Gaps in these reasoning skills make it difficult for students to become proficient in reading primary scientific literature. In this study, we assessed the ability of students in an upper-division biochemistry laboratory class to use the concepts of correlation, necessity, and sufficiency in interpreting experiments presented in a format and context that is similar to what they would encounter when reading a journal article. The students were assessed before and after completion of a laboratory module where necessary vs. sufficient reasoning was used to design and interpret experiments. The assessment identified two types of errors that were commonly committed by students when interpreting experimental data. When presented with an experiment that only establishes a correlation between a potential intermediate and a known effect, students frequently interpreted the intermediate as being sufficient (causative) for the effect. Also, when presented with an experiment that tests only necessity for an intermediate, they frequently made unsupported conclusions about sufficiency, and vice versa. Completion of the laboratory module and instruction in necessary vs. sufficient reasoning showed some promise for addressing these common errors. © 2015 The International Union of Biochemistry and Molecular Biology.

  12. A systematic review of the quality of statistical methods employed for analysing quality of life data in cancer randomised controlled trials.

    PubMed

    Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew

    2017-09-01

    Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of the randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs studying the HRQoL methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: their multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time for a period ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
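
    The inflation the review criticises, and one standard remedy, can be shown in a few lines. The 15-sub-scale-by-4-time-point layout below is hypothetical, and all 60 p-values are drawn under the null, so every rejection is a false positive.

    ```python
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(4)

    # Hypothetical HRQoL analysis: 15 sub-dimensions tested at 4 assessment
    # times, with no true effects anywhere
    pvals = rng.uniform(size=15 * 4)

    print("unadjusted rejections:  ", np.sum(pvals < 0.05))    # ~3 on average
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
    print("Holm-adjusted rejections:", reject.sum())           # usually 0
    ```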

  13. Generalized functional linear models for gene-based case-control association studies.

    PubMed

    Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao

    2014-11-01

    By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.

  14. A Novel Genome-Information Content-Based Statistic for Genome-Wide Association Analysis Designed for Next-Generation Sequencing Data

    PubMed Central

    Luo, Li; Zhu, Yun

    2012-01-01

    The genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing association of genomic variants, including common, low frequency, and rare variants. The current strategies for association studies are well developed for identifying association of common variants with the common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze collective frequency differences between cases and controls have shifted the variant-by-variant analysis paradigm of common-variant GWAS to collective tests of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing association of the entire allele frequency spectrum of genomic variation with disease. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole genome low coverage pilot data in the 1000 Genomes Project to calculate the type 1 error rates and power of seven alternative statistics: a genome-information content-based statistic, the generalized T2, collapsing method, multivariate and collapsing (CMC) method, individual χ2 test, weighted-sum statistic, and variable threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly better type 1 error rates and higher power than the other six statistics in both simulated and empirical datasets. PMID:22651812

  17. Evaluating aggregate effects of rare and common variants in the 1000 Genomes Project exon sequencing data using latent variable structural equation modeling.

    PubMed

    Nock, Nl; Zhang, Lx

    2011-11-29

    Methods that can evaluate aggregate effects of rare and common variants are limited. Therefore, we applied a two-stage approach to evaluate aggregate gene effects in the 1000 Genomes Project data, which contain 24,487 single-nucleotide polymorphisms (SNPs) in 697 unrelated individuals from 7 populations. In stage 1, we identified potentially interesting genes (PIGs) as those having at least one SNP meeting Bonferroni correction using univariate, multiple regression models. In stage 2, we evaluate aggregate PIG effects on trait, Q1, by modeling each gene as a latent construct, which is defined by multiple common and rare variants, using the multivariate statistical framework of structural equation modeling (SEM). In stage 1, we found that PIGs varied markedly between a randomly selected replicate (replicate 137) and 100 other replicates, with the exception of FLT1. In stage 1, collapsing rare variants decreased false positives but increased false negatives. In stage 2, we developed a good-fitting SEM model that included all nine genes simulated to affect Q1 (FLT1, KDR, ARNT, ELAV4, FLT4, HIF1A, HIF3A, VEGFA, VEGFC) and found that FLT1 had the largest effect on Q1 (βstd = 0.33 ± 0.05). Using replicate 137 estimates as population values, we found that the mean relative bias in the parameters (loadings, paths, residuals) and their standard errors across 100 replicates was on average, less than 5%. Our latent variable SEM approach provides a viable framework for modeling aggregate effects of rare and common variants in multiple genes, but more elegant methods are needed in stage 1 to minimize type I and type II error.

  18. Refractive errors.

    PubMed

    Schiefer, Ulrich; Kraus, Christina; Baumbach, Peter; Ungewiß, Judith; Michels, Ralf

    2016-10-14

    All over the world, refractive errors are among the most frequently occurring treatable disturbances of visual function. Ametropias have a prevalence of nearly 70% among adults in Germany and are thus of great epidemiologic and socio-economic relevance. In the light of their own clinical experience, the authors review pertinent articles retrieved by a selective literature search employing the terms "ametropia," "anisometropia," "refraction," "visual acuity," and "epidemiology." In 2011, only 31% of persons over age 16 in Germany did not use any kind of visual aid; 63.4% wore eyeglasses and 5.3% wore contact lenses. Refractive errors were the most common reason for consulting an ophthalmologist, accounting for 21.1% of all outpatient visits. A pinhole aperture (stenopeic slit) is a suitable instrument for the basic diagnostic evaluation of impaired visual function due to optical factors. Spherical refractive errors (myopia and hyperopia), cylindrical refractive errors (astigmatism), unequal refractive errors in the two eyes (anisometropia), and the typical optical disturbance of old age (presbyopia) cause specific functional limitations and can be detected by a physician who does not need to be an ophthalmologist. Simple functional tests can be used in everyday clinical practice to determine quickly, easily, and safely whether the patient is suffering from a benign and easily correctable type of visual impairment, or whether there are other, more serious underlying causes.

  19. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    PubMed

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge/training) and environmental factors, such as workplace distraction and high workload. Medication errors in the African healthcare setting are relatively common, and the impact of adverse drug events is substantial but many are preventable. This review supports the design and implementation of preventative strategies targeting the most likely contributing factors.

  20. A systematic review of the extent, nature and likely causes of preventable adverse events arising from hospital care.

    PubMed

    Sari, A Akbari; Doshmangir, L; Sheldon, T

    2010-01-01

    Understanding the nature and causes of medical adverse events may help their prevention. This systematic review explores the types, risk factors, and likely causes of preventable adverse events in the hospital sector. MEDLINE (1970-2008), EMBASE, CINAHL (1970-2005) and the reference lists were used to identify the studies and a structured narrative method used to synthesise the data. Operative adverse events were more common but less preventable and diagnostic adverse events less common but more preventable than other adverse events. Preventable adverse events were often associated with more than one contributory factor. The majority of adverse events were linked to individual human error, and a significant proportion of these caused serious patient harm. Equipment failure was involved in a small proportion of adverse events and rarely caused patient harm. The proportion of system failures varied widely ranging from 3% to 85% depending on the data collection and classification methods used. Operative adverse events are more common but less preventable than diagnostic adverse events. Adverse events are usually associated with more than one contributory factor, the majority are linked to individual human error, and a proportion of these with system failure.

  1. Medical error identification, disclosure, and reporting: do emergency medicine provider groups differ?

    PubMed

    Hobgood, Cherri; Weiner, Bryan; Tamayo-Sarver, Joshua H

    2006-04-01

    To determine if the three types of emergency medicine providers--physicians, nurses, and out-of-hospital providers (emergency medical technicians [EMTs])--differ in their identification, disclosure, and reporting of medical error. A convenience sample of providers in an academic emergency department evaluated ten case vignettes that represented two error types (medication and cognitive) and three severity levels. For each vignette, providers were asked the following: 1) Is this an error? 2) Would you tell the patient? 3) Would you report this to a hospital committee? To assess differences in identification, disclosure, and reporting by provider type, error type, and error severity, the authors constructed three-way tables with the nonparametric Somers' D clustered on participant. To assess the contribution of disclosure instruction and environmental variables, fixed-effects regression stratified by provider type was used. Of the 116 providers who were eligible, 103 (40 physicians, 26 nurses, and 35 EMTs) had complete data. Physicians were more likely to classify an event as an error (78%) than nurses (71%; p = 0.04) or EMTs (68%; p < 0.01). Nurses were less likely to disclose an error to the patient (59%) than physicians (71%; p = 0.04). Physicians were the least likely to report the error (54%) compared with nurses (68%; p = 0.02) or EMTs (78%; p < 0.01). For all provider and error types, identification, disclosure, and reporting increased with increasing severity. Improving patient safety hinges on the ability of health care providers to accurately identify, disclose, and report medical errors. Interventions must account for differences in error identification, disclosure, and reporting by provider type.
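
    Somers' D, the asymmetric ordinal association measure used in the study's three-way tables, is available in SciPy. A toy sketch follows (responses invented; the study's clustering on participant is not handled by this plain version).

    ```python
    import numpy as np
    from scipy.stats import somersd

    # Hypothetical vignette responses: error severity (1 = low .. 3 = high)
    # and whether the provider identified the event as an error (0/1)
    severity = np.array([1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 3])
    identified = np.array([0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1])

    # D of "identified" given "severity": does identification rise with severity?
    res = somersd(severity, identified)
    print(res.statistic, res.pvalue)
    ```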

  2. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is uncovered.
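
    A minimal sketch of the problem class (not the authors' derivation): causal convolution written as a lower-triangular Toeplitz matrix, with recovery attempted through a slightly mis-specified transfer function. Kernel shapes, the source history and the regularisation weight are all arbitrary; the residual error persists even though the data are noise-free, which is the signature of model error rather than observation noise.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    t = np.arange(60, dtype=float)
    h_true = np.exp(-t / 8.0)    # true impulse response
    h_true /= h_true.sum()
    h_model = np.exp(-t / 10.0)  # approximate kernel: wrong decay rate
    h_model /= h_model.sum()

    def conv_matrix(h):
        """Lower-triangular Toeplitz matrix implementing causal convolution."""
        return toeplitz(h, np.zeros_like(h))

    s = np.exp(-((t - 20.0) / 6.0) ** 2)  # source history to be recovered
    d = conv_matrix(h_true) @ s           # noise-free observations

    # Tikhonov-regularised recovery using the *approximate* kernel
    A = conv_matrix(h_model)
    lam = 1e-3
    s_hat = np.linalg.solve(A.T @ A + lam * np.eye(t.size), A.T @ d)
    print("relative recovery error:",
          np.linalg.norm(s_hat - s) / np.linalg.norm(s))
    ```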

  3. Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction

    PubMed Central

    Laehnemann, David; Borkhardt, Arndt

    2016-01-01

    Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance on which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159
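
    One idea shared by many of the correctors surveyed above is that k-mers occurring only rarely across reads are more likely to contain sequencing errors. The toy sketch below flags singleton k-mers; the reads and threshold are invented, and no real tool's model is implied.

```python
# k-mer spectrum filtering in miniature: singleton k-mers are error candidates.
from collections import Counter

reads = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACCTAC", "ACGTACGTAC"]  # 3rd read has an error
k = 5

def kmers(seq, k):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

counts = Counter(km for r in reads for km in kmers(r, k))
weak = {km for km, c in counts.items() if c == 1}   # singleton k-mers

for r in reads:
    suspicious = [km for km in kmers(r, k) if km in weak]
    if suspicious:
        print(f"read {r}: likely error within {suspicious}")
```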

  4. Characterizing the impact of model error in hydrologic time series recovery inverse problems

    DOE PAGES

    Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.

    2017-10-28

    Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is uncovered.

  5. The effectiveness of risk management program on pediatric nurses' medication error.

    PubMed

    Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat

    2013-09-01

    Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings about damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses of the control hospital followed the hospital's routine schedule. A pre- and post-test was performed to measure the frequency of medication error events. SPSS software, t-tests, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower (P < 0.001) and the error-reporting rate was higher (P < 0.007) compared to before the intervention and also in comparison to the nurses of the control hospital. Based on the results of this study, and taking into account the high-risk nature of the medical environment, applying quality-control programs such as risk management can effectively prevent the occurrence of undesirable hospital events. Nursing managers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.

  6. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    PubMed

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

    Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Among this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the most common antimicrobial causing errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patient and caused harm, and 11% reached the patient and required increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating the risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  8. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  9. Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Dale; Selby, Neil

    2012-08-14

    Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event screening hypothesis test is not fully consistent with its physical basis. An improved standard error agrees better with the physical basis, correctly partitions the error to include model error as a component of variance, and correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with a better scaling slope (β = 1, Selby et al.), whereas the improved standard error 'fails to reject' H0.
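
    The combination step the abstract mentions can be illustrated with SciPy's built-in Fisher and Tippett methods for combining p-values; the two single-phenomenology p-values below are invented placeholders.

```python
# Combining single-phenomenology screening p-values into one joint test.
from scipy.stats import combine_pvalues

p_ms_mb, p_depth = 0.08, 0.15        # e.g., Ms:mb screen and event-depth screen (invented)

for method in ("fisher", "tippett"):
    stat, p_joint = combine_pvalues([p_ms_mb, p_depth], method=method)
    print(f"{method:>8}: statistic = {stat:.3f}, joint p = {p_joint:.4f}")
```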

  10. An experimental system for the study of active vibration control - Development and modeling

    NASA Astrophysics Data System (ADS)

    Batta, George R.; Chen, Anning

    A modular rotational vibration system designed to facilitate the study of active control of vibrating systems is discussed. The model error associated with four common types of identification problems has been studied. The general multiplicative uncertainty shape for a vibration system is small at low frequencies and large at high frequencies. The frequency-domain error function has sharp peaks near the frequency of each mode. The inability to identify a high-frequency mode causes an increase of uncertainties at all frequencies. Missing a low-frequency mode causes the uncertainties to be much larger at all frequencies than missing a high-frequency mode. Hysteresis causes a small increase of uncertainty at low frequencies, but its overall effect is relatively small.
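
    A minimal sketch of the multiplicative-uncertainty shape described above: a "true" two-mode vibration model is compared against a nominal model that misses the high-frequency mode, and the uncertainty is largest near the unmodeled mode. All modal parameters are invented.

```python
# Multiplicative model uncertainty |(H_true - H_nom) / H_nom| over frequency.
import numpy as np
from scipy import signal

w = np.logspace(-1, 2, 500)                       # rad/s

def modal_tf(modes):
    """Sum of second-order modes: each mode is (wn, zeta, gain)."""
    resp = np.zeros_like(w, dtype=complex)
    for wn, z, g in modes:
        _, H = signal.freqresp(signal.TransferFunction([g], [1, 2 * z * wn, wn**2]), w)
        resp += H
    return resp

H_true = modal_tf([(1.0, 0.02, 1.0), (30.0, 0.02, 0.5)])
H_nom = modal_tf([(1.0, 0.02, 1.0)])              # high-frequency mode left unmodeled

mult_unc = np.abs((H_true - H_nom) / H_nom)
print("uncertainty at low frequency:", mult_unc[0])
print("max uncertainty (near missed mode):", mult_unc.max())
```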

  11. False biochemical diagnosis of hyperthyroidism in streptavidin-biotin-based immunoassays: the problem of biotin intake and related interferences.

    PubMed

    Piketty, Marie-Liesse; Polak, Michel; Flechtner, Isabelle; Gonzales-Briceño, Laura; Souberbielle, Jean-Claude

    2017-05-01

    Immunoassays are now commonly used for hormone measurement in high-throughput analytical platforms. Immunoassays are generally robust to interference. However, endogenous analytical error may occur in some patients; this may be encountered with biotin supplementation or in the presence of anti-streptavidin antibody in immunoassays involving the streptavidin-biotin interaction. In these cases, the interference may induce both false positive and false negative results, and simulate a seemingly coherent hormonal profile. It is to be feared that this type of error will be observed more frequently. This review underlines the importance of keeping close interactions between biologists and clinicians to be able to correlate hormonal assay results with the clinical picture.

  12. WASP (Write a Scientific Paper) using Excel - 8: t-Tests.

    PubMed

    Grech, Victor

    2018-06-01

    t-Testing is a common component of inferential statistics when comparing two means. This paper explains the central limit theorem and the concept of the null hypothesis, as well as types of errors. On the practical side, this paper outlines how different t-tests may be performed in Microsoft Excel for different purposes, both statically and dynamically, with Excel's functions. Copyright © 2018 Elsevier B.V. All rights reserved.
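
    The paper works in Excel; for comparison, the same three common t-test variants can be run in Python with scipy.stats, here on invented data.

```python
# One-sample, independent two-sample, and paired t-tests on invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(5.0, 1.0, 30)
b = rng.normal(5.5, 1.0, 30)

print(stats.ttest_1samp(a, popmean=5.0))   # one-sample: mean of a vs. 5.0
print(stats.ttest_ind(a, b))               # two-sample, independent groups
print(stats.ttest_rel(a, b))               # paired (e.g., pre/post on same units)
```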

  13. Towards more reliable automated multi-dose dispensing: retrospective follow-up study on medication dose errors and product defects.

    PubMed

    Palttala, Iida; Heinämäki, Jyrki; Honkanen, Outi; Suominen, Risto; Antikainen, Osmo; Hirvonen, Jouni; Yliruusi, Jouko

    2013-03-01

    To date, little is known about the applicability of different types of pharmaceutical dosage forms to an automated high-speed multi-dose dispensing process. The purpose of the present study was to identify and further investigate various process-induced and/or product-related limitations associated with the multi-dose dispensing process. The rates of product defects and dose dispensing errors in automated multi-dose dispensing were retrospectively investigated during a 6-month follow-up period. The study was based on the analysis of process data from a total of nine automated high-speed multi-dose dispensing systems. Special attention was paid to the dependence of multi-dose dispensing errors/product defects on pharmaceutical tablet properties (such as shape, dimensions, weight, scored lines, coatings, etc.) to profile the forms of tablets most suitable for automated dose dispensing systems. The relationship between the risk of errors in dose dispensing and tablet characteristics was visualized by creating a principal component analysis (PCA) model for the outcome of dispensed tablets. The two most common process-induced failures identified in multi-dose dispensing are predisposal of tablet defects and unexpected product transitions in the medication cassette (dose dispensing error). The tablet defects are product-dependent failures, while the tablet transitions depend on the automated multi-dose dispensing system used. The occurrence of tablet defects is approximately twice as common as tablet transitions. The optimal tablet for high-speed multi-dose dispensing would be round, relatively small or middle-sized, film-coated, and without a scored line. Commercial tablet products can be profiled and classified based on their suitability for a high-speed multi-dose dispensing process.
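
    A minimal sketch of the PCA profiling step, assuming a hypothetical feature matrix of tablet characteristics; the feature names, values, and comments are illustrative assumptions, not the study's data.

```python
# Projecting tablet products into a 2-D principal component space so that
# products with similar dispensing behavior cluster together.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# columns: diameter_mm, thickness_mm, weight_mg, film_coated, scored_line
X = np.array([
    [7.0, 3.0, 150, 1, 0],    # small, round, film-coated: dispenses well
    [8.0, 3.5, 200, 1, 0],
    [12.0, 5.0, 600, 0, 1],   # large, scored, uncoated: defect-prone
    [11.0, 4.5, 550, 0, 1],
    [9.0, 4.0, 300, 1, 0],
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores)   # similar dispensing behavior should cluster in this plane
```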

  14. Aircraft Power-Plant Instruments

    NASA Technical Reports Server (NTRS)

    Sontag, Harcourt; Brombacher, W G

    1934-01-01

    This report supersedes NACA-TR-129 which is now obsolete. Aircraft power-plant instruments include tachometers, engine thermometers, pressure gages, fuel-quantity gages, fuel flow meters and indicators, and manifold pressure gages. The report includes a description of the commonly used types and some others, the underlying principle utilized in the design, and some design data. The inherent errors of the instrument, the methods of making laboratory tests, descriptions of the test apparatus, and data in considerable detail in the performance of commonly used instruments are presented. Standard instruments and, in cases where it appears to be of interest, those used as secondary standards are described. A bibliography of important articles is included.

  15. Comparing models of change to estimate the mediated effect in the pretest-posttest control group design

    PubMed Central

    Valente, Matthew J.; MacKinnon, David P.

    2017-01-01

    Models to assess mediation in the pretest-posttest control group design are understudied in the behavioral sciences even though it is the design of choice for evaluating experimental manipulations. The paper provides analytical comparisons of the four models most commonly used to estimate the mediated effect in this design: Analysis of Covariance (ANCOVA), difference score, residualized change score, and cross-sectional model. Each of these models is fitted using a Latent Change Score specification, and a simulation study assessed bias, Type I error, power, and confidence interval coverage of the four models. All but the ANCOVA model make stringent assumptions about the stability and cross-lagged relations of the mediator and outcome that may not be plausible in real-world applications. When these assumptions do not hold, Type I error and statistical power results suggest that only the ANCOVA model has good performance. The four models are applied to an empirical example. PMID:28845097
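
    A minimal sketch of the ANCOVA mediation model the paper favors, estimated here with ordinary least squares rather than the paper's Latent Change Score specification; the data are simulated so the product of paths is known in advance.

```python
# ANCOVA-form mediation in the pretest-posttest design: the mediated effect
# is the product of the a-path (treatment -> posttest mediator) and b-path
# (posttest mediator -> posttest outcome), each adjusted for the pretest.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
x = rng.integers(0, 2, n)                                     # randomized treatment
m_pre = rng.standard_normal(n)
m_post = 0.5 * m_pre + 0.6 * x + rng.standard_normal(n)       # true a-path = 0.6
y_pre = rng.standard_normal(n)
y_post = 0.5 * y_pre + 0.4 * m_post + rng.standard_normal(n)  # true b-path = 0.4

# Mediator model: posttest M on pretest M and treatment.
a = sm.OLS(m_post, sm.add_constant(np.column_stack([m_pre, x]))).fit().params[2]
# Outcome model: posttest Y on pretest Y, posttest M, and treatment.
b = sm.OLS(y_post, sm.add_constant(np.column_stack([y_pre, m_post, x]))).fit().params[2]

print("estimated mediated effect a*b =", a * b)   # should be near 0.6 * 0.4 = 0.24
```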

  16. Comparing models of change to estimate the mediated effect in the pretest-posttest control group design.

    PubMed

    Valente, Matthew J; MacKinnon, David P

    2017-01-01

    Models to assess mediation in the pretest-posttest control group design are understudied in the behavioral sciences even though it is the design of choice for evaluating experimental manipulations. The paper provides analytical comparisons of the four models most commonly used to estimate the mediated effect in this design: Analysis of Covariance (ANCOVA), difference score, residualized change score, and cross-sectional model. Each of these models is fitted using a Latent Change Score specification, and a simulation study assessed bias, Type I error, power, and confidence interval coverage of the four models. All but the ANCOVA model make stringent assumptions about the stability and cross-lagged relations of the mediator and outcome that may not be plausible in real-world applications. When these assumptions do not hold, Type I error and statistical power results suggest that only the ANCOVA model has good performance. The four models are applied to an empirical example.

  17. DNA polymerase η mutational signatures are found in a variety of different types of cancer.

    PubMed

    Rogozin, Igor B; Goncearenco, Alexander; Lada, Artem G; De, Subhajyoti; Yurchenko, Vyacheslav; Nudelman, German; Panchenko, Anna R; Cooper, David N; Pavlov, Youri I

    2018-01-01

    DNA polymerase (pol) η is a specialized error-prone polymerase with at least two quite different and contrasting cellular roles: to mitigate the genetic consequences of solar UV irradiation, and to promote somatic hypermutation in the variable regions of immunoglobulin genes. Misregulation and mistargeting of pol η can compromise genome integrity. We explored whether the mutational signature of pol η could be found in datasets of human somatic mutations derived from normal and cancer cells. A substantial excess of single and tandem somatic mutations within known pol η mutable motifs was noted in skin cancer as well as in many other types of human cancer, suggesting that somatic mutations in A:T bases generated by DNA polymerase η are a common feature of tumorigenesis. Another peculiarity of pol η mutational signatures, mutations in YCG motifs, led us to speculate that error-prone DNA synthesis opposite methylated CpG dinucleotides by misregulated pol η in tumors might constitute an additional mechanism of cytosine demethylation in this hypermutable dinucleotide.
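
    As a toy illustration of motif-based signature counting, the sketch below tallies how many observed mutations fall in a WA dinucleotide motif (W = A or T), a motif often associated with pol η; the sequence, positions, and motif choice are illustrative assumptions, not the paper's analysis.

```python
# Count somatic mutations landing on the A of WA (TA or AA) dinucleotides.
import re

seq = "GCTAATTAGGCATACGTAAGC"
mutated_positions = [3, 9, 16]     # 0-based positions of observed mutations (invented)

# Positions of the A in each overlapping WA dinucleotide (lookahead keeps overlaps).
motif_hits = {m.start() + 1 for m in re.finditer(r"(?=[AT]A)", seq)}

in_motif = [p for p in mutated_positions if p in motif_hits]
print(f"{len(in_motif)} of {len(mutated_positions)} mutations fall in WA motifs: {in_motif}")
```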

  18. Profile of refractive errors in cerebral palsy: impact of severity of motor impairment (GMFCS) and CP subtype on refractive outcome.

    PubMed

    Saunders, Kathryn J; Little, Julie-Anne; McClelland, Julie F; Jackson, A Jonathan

    2010-06-01

    To describe refractive status in children and young adults with cerebral palsy (CP) and relate refractive error to standardized measures of type and severity of CP impairment and to ocular dimensions. A population-based sample of 118 participants aged 4 to 23 years with CP (mean 11.64 +/- 4.06) and an age-appropriate control group (n = 128; age, 4-16 years; mean, 9.33 +/- 3.52) were recruited. Motor impairment was described with the Gross Motor Function Classification Scale (GMFCS), and subtype was allocated with the Surveillance of Cerebral Palsy in Europe (SCPE). Measures of refractive error were obtained from all participants and ocular biometry from a subgroup with CP. A significantly higher prevalence and magnitude of refractive error was found in the CP group compared to the control group. Axial length and spherical refractive error were strongly related. This relation did not improve with inclusion of corneal data. There was no relation between the presence or magnitude of spherical refractive errors in CP and the level of motor impairment, intellectual impairment, or the presence of communication difficulties. Higher spherical refractive errors were significantly associated with the nonspastic CP subtype. The presence and magnitude of astigmatism were greater when intellectual impairment was more severe, and astigmatic errors were explained by corneal dimensions. Conclusions. High refractive errors are common in CP, pointing to impairment of the emmetropization process. Biometric data support this. In contrast to other functional vision measures, spherical refractive error is unrelated to CP severity, but those with nonspastic CP tend to demonstrate the most extreme errors in refraction.

  19. Model Error Budgets

    NASA Technical Reports Server (NTRS)

    Briggs, Hugh C.

    2008-01-01

    An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through the use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high-performance space systems.
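
    A minimal sketch of the flow-down idea: allocate a top-level allowable error to subsystems, including a reserve line for model error, and check the root-sum-square roll-up against the allowable. All numbers are invented placeholders.

```python
# Root-sum-square (RSS) roll-up of subsystem error allocations against a
# top-level allowable, with a reserve line for model error itself.
import math

top_level_allowable = 10.0   # e.g., microns of wavefront error (invented)

allocations = {
    "structure_thermal": 5.0,
    "optics_figure": 6.0,
    "alignment": 4.0,
    "model_uncertainty": 3.0,   # reserve for model error
}

rss = math.sqrt(sum(v**2 for v in allocations.values()))
print(f"RSS roll-up = {rss:.2f} vs allowable {top_level_allowable:.2f}")
print("budget closes" if rss <= top_level_allowable else "budget does not close")
```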

  20. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log 10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
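
    A minimal sketch of the simulation design described above: regress a skewed (gamma) trait on a rare SNV under the null hypothesis and tabulate how often p < 0.05. The sample size, allele frequency, and replicate count are invented.

```python
# Empirical type I error of simple linear regression for a rare SNV and a
# non-normal trait, under the null (trait independent of genotype).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, maf, reps, alpha = 1000, 0.005, 2000, 0.05

rejections = 0
for _ in range(reps):
    genotype = rng.binomial(2, maf, n)            # rare SNV under Hardy-Weinberg
    trait = rng.gamma(1.0, 1.0, size=n)           # skewed trait, independent of genotype
    if genotype.std() == 0:                       # skip monomorphic draws (rare)
        continue
    if stats.linregress(genotype, trait).pvalue < alpha:
        rejections += 1

print(f"empirical type I error: {rejections / reps:.4f} (nominal {alpha})")
```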

  1. Patterns of technical error among surgical malpractice claims: an analysis of strategies to prevent injury to surgical patients.

    PubMed

    Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A

    2007-11-01

    To identify the most prevalent patterns of technical errors in surgery, and to evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined the characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument designed by qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%) and occurred in routine, rather than index, operations (84%). Patient-related complexities, including emergencies, difficult or unexpected anatomy, and previous surgery, contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors. Surgical safety research should instead focus on improving decision-making and performance in routine operations for complex patients and circumstances.

  2. SBL-Online: Implementing Studio-Based Learning Techniques in an Online Introductory Programming Course to Address Common Programming Errors and Misconceptions

    ERIC Educational Resources Information Center

    Polo, Blanca J.

    2013-01-01

    Much research has been done in regards to student programming errors, online education and studio-based learning (SBL) in computer science education. This study furthers this area by bringing together this knowledge and applying it to proactively help students overcome impasses caused by common student programming errors. This project proposes a…

  3. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome-Wide Association Studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.

  4. Exploring Common Misconceptions and Errors about Fractions among College Students in Saudi Arabia

    ERIC Educational Resources Information Center

    Alghazo, Yazan M.; Alghazo, Runna

    2017-01-01

    The purpose of this study was to investigate what common errors and misconceptions about fractions exist among Saudi Arabian college students. Moreover, the study aimed at investigating the possible explanations for the existence of such misconceptions among students. A researcher developed mathematical test aimed at identifying common errors…

  5. A closer look at the effect of preliminary goodness-of-fit testing for normality for the one-sample t-test.

    PubMed

    Rochon, Justine; Kieser, Meinhard

    2011-11-01

    Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng, it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without a preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without a pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2)) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples, as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
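
    A minimal sketch of the two-stage procedure studied above: draw exponential samples, keep those passing a Shapiro-Wilk normality pretest, and compute the conditional Type I error of the one-sample t-test among the passing samples. Sample size and thresholds are invented.

```python
# Conditional Type I error of the t-test after a normality pretest, for an
# exponential (skewed) parent distribution with known true mean 1.0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n, reps, alpha = 20, 5000, 0.05

passed = rejected = 0
for _ in range(reps):
    x = rng.exponential(scale=1.0, size=n)        # true mean is 1.0
    if stats.shapiro(x).pvalue > 0.05:            # sample "looks normal"
        passed += 1
        rejected += stats.ttest_1samp(x, popmean=1.0).pvalue < alpha

print(f"conditional type I error: {rejected / passed:.4f} on {passed} passing samples")
```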

  6. UNDERSTANDING OR NURSES' REACTIONS TO ERRORS AND USING THIS UNDERSTANDING TO IMPROVE PATIENT SAFETY.

    PubMed

    Taifoori, Ladan; Valiee, Sina

    2015-09-01

    The operating room can be home to many different types of nursing errors due to the invasiveness of OR procedures. Nurses' reactions toward errors can be a key factor in patient safety. This article is based on a study, conducted at Kurdistan University of Medical Sciences in Sanandaj, Iran in 2014, investigating nurses' reactions toward nursing errors and the various contributing and resulting factors. The goal of the study was to determine how OR nurses reacted to nursing errors, so that this information could be used to improve patient safety. Research was conducted as a cross-sectional descriptive study. The participants were all nurses employed in the operating rooms of the teaching hospitals of Kurdistan University of Medical Sciences, selected by a consensus method (170 persons). The information was gathered through questionnaires that focused on demographic information, error definition, reasons for error occurrence, and emotional reactions toward the errors. In all, 153 questionnaires were completed and analyzed with SPSS software version 16.0. "Not following sterile technique" (82.4 percent) was the most reported nursing error, "tiredness" (92.8 percent) was the most reported reason for error occurrence, and "being upset at having harmed the patient" (85.6 percent) was the most reported emotional reaction after an error, with "deciding on a better approach to tasks the next time" (97.7 percent) as the most common goal and "paying more attention to details" (98 percent) as the most reported planned strategy for improved future outcomes. While healthcare facilities are focused on planning for the prevention and elimination of errors, it was shown that nurses can also benefit from support after an error occurs. Their reactions and coping strategies need guidance and, with both individual and organizational support, can be a factor in improving patient safety.

  7. Exploring the Phenotype of Phonological Reading Disability as a Function of the Phonological Deficit Severity: Evidence from the Error Analysis Paradigm in Arabic

    ERIC Educational Resources Information Center

    Taha, Haitham; Ibrahim, Raphiq; Khateb, Asaid

    2014-01-01

    The dominant error types were investigated as a function of phonological processing (PP) deficit severity in four groups of impaired readers. For this aim, an error analysis paradigm distinguishing between four error types was used. The findings revealed that the different types of impaired readers were characterized by differing predominant error…

  8. The underreporting of medication errors: A retrospective and comparative root cause analysis in an acute mental health unit over a 3-year period.

    PubMed

    Morrison, Maeve; Cope, Vicki; Murray, Melanie

    2018-05-15

    Medication errors remain a commonly reported clinical incident in health care as highlighted by the World Health Organization's focus to reduce medication-related harm. This retrospective quantitative analysis examined medication errors reported by staff using an electronic Clinical Incident Management System (CIMS) during a 3-year period from April 2014 to April 2017 at a metropolitan mental health ward in Western Australia. The aim of the project was to identify types of medication errors and the context in which they occur and to consider recourse so that medication errors can be reduced. Data were retrieved from the Clinical Incident Management System database and concerned medication incidents from categorized tiers within the system. Areas requiring improvement were identified, and the quality of the documented data captured in the database was reviewed for themes pertaining to medication errors. Content analysis provided insight into the following issues: (i) frequency of problem, (ii) when the problem was detected, and (iii) characteristics of the error (classification of drug/s, where the error occurred, what time the error occurred, what day of the week it occurred, and patient outcome). Data were compared to the state-wide results published in the Your Safety in Our Hands (2016) report. Results indicated several areas upon which quality improvement activities could be focused. These include the following: structural changes; changes to policy and practice; changes to individual responsibilities; improving workplace culture to counteract underreporting of medication errors; and improvement in safety and quality administration of medications within a mental health setting. © 2018 Australian College of Mental Health Nurses Inc.

  9. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students' competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors made. Hence, this study aimed at identifying students' errors in solving TIMSS mathematical problems on the topic of numbers, a fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students who made the most errors on the test indicators, selected from 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving Applying-level problems, students made operational errors. For Reasoning-level problems, three types of errors were made: conceptual errors, operational errors, and principal errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

  10. Architecture-Led Safety Analysis of the Joint Multi-Role (JMR) Joint Common Architecture (JCA) Demonstration System

    DTIC Science & Technology

    2015-12-01

    relevant system components (i.e., their component type declarations) have been annotated with EMV2 error source or propagation declarations and hazard contributors. They are recorded as EMV2 annotations for each of the ASSA. Figure 40 shows a sampling of potential hazard contributors by the functional...

  11. Conceptual versus Algorithmic Learning in High School Chemistry: The Case of Basic Quantum Chemical Concepts--Part 2. Students' Common Errors, Misconceptions and Difficulties in Understanding

    ERIC Educational Resources Information Center

    Papaphotis, Georgios; Tsaparlis, Georgios

    2008-01-01

    Part 2 of the findings are presented of a quantitative study (n = 125) on basic quantum chemical concepts taught at twelfth grade (age 17-18 years) in Greece. A paper-and-pencil test of fourteen questions was used that were of two kinds: five questions that tested recall of knowledge or application of algorithmic procedures (type-A questions);…

  12. What does "Diversity" Mean for Public Engagement in Science? A New Metric for Innovation Ecosystem Diversity.

    PubMed

    Özdemir, Vural; Springer, Simon

    2018-03-01

    Diversity is increasingly at stake in the early 21st century. Diversity is often conceptualized across ethnicity, gender, socioeconomic status, sexual preference, and professional credentials, among other categories of difference. These are important and relevant considerations, and yet they are incomplete. Diversity also rests in the way we frame questions long before answers are sought. Such diversity in the framing (epistemology) of scientific and societal questions is important because it influences the types of data, results, and impacts produced by research. Errors in the framing of a research question, whether in technical science or social science, are known as type III errors, as opposed to the better known type I (false positives) and type II (false negatives) errors. Kimball defined the "error of the third kind" as giving the right answer to the wrong problem. Raiffa described the type III error as correctly solving the wrong problem. Type III errors are upstream or design flaws, often driven by unchecked human values and power; they can adversely impact an entire innovation ecosystem and waste money, time, careers, and precious resources by focusing on the wrong or incorrectly framed question and hypothesis. Decades may pass while technology experts, scientists, social scientists, funding agencies and management consultants continue to tackle questions that suffer from type III errors. We propose a new diversity metric, the Frame Diversity Index (FDI), based on the hitherto neglected diversities in knowledge framing. The FDI would be positively correlated with epistemological diversity and technological democracy, and inversely correlated with the prevalence of type III errors in innovation ecosystems, consortia, and knowledge networks. We suggest that the FDI can usefully measure (and help prevent) type III error risks in innovation ecosystems, and help broaden the concepts and practices of diversity and inclusion in science, technology, innovation and society.

  13. Detecting Lung and Colorectal Cancer Recurrence Using Structured Clinical/Administrative Data to Enable Outcomes Research and Population Health Management.

    PubMed

    Hassett, Michael J; Uno, Hajime; Cronin, Angel M; Carroll, Nikki M; Hornbrook, Mark C; Ritzwoller, Debra

    2017-12-01

    Recurrent cancer is common, costly, and lethal, yet we know little about it in community-based populations. Electronic health records and tumor registries contain vast amounts of data regarding community-based patients, but usually lack recurrence status. Existing algorithms that use structured data to detect recurrence have limitations. We developed algorithms to detect the presence and timing of recurrence after definitive therapy for stages I-III lung and colorectal cancer using 2 data sources that contain a widely available type of structured data (claims or electronic health record encounters) linked to gold-standard recurrence status: Medicare claims linked to the Cancer Care Outcomes Research and Surveillance study, and the Cancer Research Network Virtual Data Warehouse linked to registry data. Twelve potential indicators of recurrence were used to develop separate models for each cancer in each data source. Detection models maximized area under the ROC curve (AUC); timing models minimized average absolute error. Algorithms were compared by cancer type/data source, and contrasted with an existing binary detection rule. Detection model AUCs (>0.92) exceeded existing prediction rules. Timing models yielded absolute prediction errors that were small relative to follow-up time (<15%). Similar covariates were included in all detection and timing algorithms, though differences by cancer type and dataset challenged efforts to create 1 common algorithm for all scenarios. Valid and reliable detection of recurrence using big data is feasible. These tools will enable extensive, novel research on quality, effectiveness, and outcomes for lung and colorectal cancer patients and those who develop recurrence.
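
    A minimal sketch of the detection-model idea, assuming hypothetical structured indicators and simulated recurrence labels; the study's actual algorithms and covariates differ.

```python
# Fit a classifier on structured utilization indicators and score it by AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
# Hypothetical indicators: chemo claims, imaging claims, hospice enrollment.
X = rng.poisson([1.0, 2.0, 0.1], size=(n, 3)).astype(float)
logit = -2.0 + 0.8 * X[:, 0] + 0.4 * X[:, 1] + 1.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))      # simulated recurrence status

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```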

  14. Indications for and outcomes of tertiary referrals in refractive surgery.

    PubMed

    Patryn, Eliza K; Vrijman, Violette; Nieuwendaal, Carla P; van der Meulen, Ivanka J E; Mourits, Maarten P; Lapid-Gortzak, Ruth

    2014-01-01

    To review the spectrum of disease, symptomatology, and management offered to patients referred for a second opinion after refractive surgery. A prospective cohort study was done on all patients referred from October 1, 2006, to September 30, 2011, to a tertiary eye clinic after refractive surgery of any kind (ie, corneal laser surgery, conductive keratoplasty, radial keratotomy, phakic implants, refractive lens exchanges, or any combination thereof). Data analysis was performed on all demographic and clinical aspects of this cohort, including the initial complaint, type of referral, number of complaints, procedure previously performed, diagnosis at our center, type of advice given, and rate and type of surgical intervention. One hundred thirty-one eyes (69 patients) were included. Corneal refractive surgery was performed in 82% (108 eyes), and 11% (14 eyes) were seen after phakic intraocular lens (PIOL) implantation and 7% (9 eyes) after refractive lens exchange. The most common diagnoses were tear film dysfunction (30 eyes, 23%), residual refractive error (25 eyes, 19%), and cataract (20 eyes, 15%). Most patients (42 patients, 61%) were treated conservatively. In 27 patients (39%), 36 eyes (28%) were managed surgically. Severe visual loss was seen in 1 eye. No major problems were found in most second opinions after refractive surgery referral. Dry eyes, small residual refractive error, or higher-order aberrations were the most common complaints. Surgical intervention was needed in 36 eyes (28%), almost half of which were cataract extractions. Severe visual loss was seen in 1 eye with a PIOL. There was no incidence of severe visual loss in keratorefractive and refractive lens exchange procedures. Copyright 2014, SLACK Incorporated.

  15. ON MODEL SELECTION STRATEGIES TO IDENTIFY GENES UNDERLYING BINARY TRAITS USING GENOME-WIDE ASSOCIATION DATA.

    PubMed

    Wu, Zheyang; Zhao, Hongyu

    2012-01-01

    For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies.

  16. ON MODEL SELECTION STRATEGIES TO IDENTIFY GENES UNDERLYING BINARY TRAITS USING GENOME-WIDE ASSOCIATION DATA

    PubMed Central

    Wu, Zheyang; Zhao, Hongyu

    2013-01-01

    For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies. PMID:23956610

  17. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection, and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  18. Electrocardiograms with pacemakers: accuracy of computer reading.

    PubMed

    Guglin, Maya E; Datwani, Neeta

    2007-04-01

    We analyzed the accuracy with which a computer algorithm reads electrocardiograms (ECGs) with electronic pacemakers (PMs). Electrocardiograms were screened for the presence of electronic pacing spikes. Computer-derived interpretations were compared with cardiologists' readings. Computer-derived interpretations required revision by cardiologists in 61.3% of cases. In 18.4% of cases, the ECG reading algorithm failed to recognize the presence of a PM. The misinterpretation of paced beats as intrinsic beats led to multiple secondary errors, including spurious diagnoses of myocardial infarction in varying localizations. The most common error in computer reading was the failure to identify an underlying rhythm. This error caused frequent misidentification of the PM type, especially when the presence of normal sinus rhythm was not recognized in a tracing with a DDD PM tracking the atrial activity. The increasing number of pacing devices, and the resulting number of ECGs with pacing spikes, mandates the refining of ECG reading algorithms. Improvement is especially needed in the recognition of the underlying rhythm, pacing spikes, and mode of pacing.

  19. Integration of imagery and cartographic data through a common map base

    NASA Technical Reports Server (NTRS)

    Clark, J.

    1983-01-01

    Several disparate data types are integrated by using control points as the basis for spatially registering the data to a map base. The data are reprojected to match the coordinates of the reference UTM (Universal Transverse Mercator) map projection, as expressed in lines and samples. Control point selection is the most critical aspect of integrating the Thematic Mapper Simulator MSS imagery with the cartographic data. It is noted that control points chosen from the imagery are subject to error from mislocated points, either points that did not correlate well to the reference map or minor pixel offsets caused by interactive cursoring errors. Errors are also introduced in map control points when points are improperly located and digitized, leading to inaccurate latitude and longitude coordinates. Nonsystematic aircraft platform variations, such as yaw, pitch, and roll, affect the spatial fidelity of the imagery in comparison with the quadrangles. Features in adjacent flight paths do not always correspond properly owing to the systematic panorama effect and alteration of flightline direction, as well as platform variations.

  20. Combating omission errors through task analysis and good reminders.

    PubMed

    Reason, J

    2002-03-01

    Leaving out necessary task steps is the single most common human error type. Certain task steps possess characteristics that are more likely to provoke omissions than others, and can be identified in advance. The paper reports two studies. The first, involving a simple photocopier, established that failing to remove the last page of the original is the commonest omission. This step possesses four distinct error-provoking features that combine their effects in an additive fashion. The second study examined the degree to which everyday memory aids satisfy five features of a good reminder: conspicuity, contiguity, content, context, and countability. A close correspondence was found between the percentage use of strategies and the degree to which they satisfied these five criteria. A three stage omission management programme was outlined: task analysis (identifying discrete task steps) of some safety critical activity; assessing the omission likelihood of each step; and the choice and application of a suitable reminder. Such a programme is applicable to a variety of healthcare procedures.

  1. Metacognition and proofreading: the roles of aging, motivation, and interest.

    PubMed

    Hargis, Mary B; Yue, Carole L; Kerr, Tyson; Ikeda, Kenji; Murayama, Kou; Castel, Alan D

    2017-03-01

    The current study examined younger and older adults' error detection accuracy, prediction calibration, and postdiction calibration on a proofreading task, to determine if age-related differences would be present in this type of common error detection task. Participants were given text passages, and were first asked to predict the percentage of errors they would detect in the passage. They then read the passage and circled errors (which varied in complexity and locality), and made postdictions regarding their performance, before repeating this with another passage and answering a comprehension test of both passages. There were no age-related differences in error detection accuracy, text comprehension, or metacognitive calibration, though participants in both age groups were overconfident overall in their metacognitive judgments. Both groups gave similar ratings of motivation to complete the task. The older adults rated the passages as more interesting than younger adults did, although this level of interest did not appear to influence error-detection performance. The age equivalence in both proofreading ability and calibration suggests that the ability to proofread text passages and the associated metacognitive monitoring used in judging one's own performance are maintained in aging. These age-related similarities persisted when younger adults completed the proofreading tasks on a computer screen, rather than with paper and pencil. The findings provide novel insights regarding the influence that cognitive aging may have on metacognitive accuracy and text processing in an everyday task.

  2. Impact of pharmacy technician-centered medication reconciliation on optimization of antiretroviral therapy and opportunistic infection prophylaxis in hospitalized patients with HIV/AIDS.

    PubMed

    Siemianowski, Laura A; Sen, Sanchita; George, Jomy M

    2013-08-01

    This study aimed to examine the role of a pharmacy technician-centered medication reconciliation (PTMR) program in the optimization of medication therapy in hospitalized patients with HIV/AIDS. A chart review was conducted for all inpatients who had a medication reconciliation performed by the PTMR program. Adult patients with HIV and antiretroviral therapy (ART) and/or opportunistic infection (OI) prophylaxis listed on the medication reconciliation form were included. The primary objective is to describe (1) the number and types of medication errors and (2) the percentage of patients who received appropriate ART. The secondary objective is a comparison of the number of medication errors between standard medication reconciliation and a pharmacy-led program. In the PTMR period, 55 admissions were evaluated. In all, 50% of the patients received appropriate ART. In 27 of the 55 admissions, there were 49 combined ART and OI-related errors. The most common ART-related errors were drug-drug interactions. The incidence of ART-related medication errors that included drug-drug interactions and renal dosing adjustments was similar between the pre-PTMR and PTMR groups (P = .0868). Of the 49 errors in the PTMR group, 18 were intervened on by a medication reconciliation pharmacist. A PTMR program has a positive impact on optimizing ART and OI prophylaxis in patients with HIV/AIDS.

  3. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
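
    A minimal sketch of the testing logic: because the semi-nonparametric model nests the MNL, a standard likelihood ratio test applies. The log-likelihoods and degrees of freedom below are invented placeholders.

```python
# Likelihood ratio test of the nested MNL (standard Gumbel errors) against a
# nesting semi-nonparametric model with extra distributional parameters.
from scipy.stats import chi2

ll_mnl = -2541.7        # restricted model log-likelihood (invented)
ll_snp = -2528.3        # nesting semi-nonparametric model log-likelihood (invented)
extra_params = 4        # added terms describing the error distribution (invented)

lr = 2 * (ll_snp - ll_mnl)
p = chi2.sf(lr, df=extra_params)
print(f"LR = {lr:.2f}, p = {p:.4g}")
print("reject Gumbel assumption" if p < 0.05 else "fail to reject")
```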

  5. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error; Type 2: gantry angle-dependent MLC error; and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of the Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement, within 1%, between the TPS calculation and the MFX measurement. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
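
    For readers unfamiliar with gamma analysis, the following is a minimal one-dimensional sketch of the gamma-index comparison (dose difference plus distance to agreement) that underlies the gamma passing rates reported above. The dose profiles, grid spacing, and 3%/3 mm criteria are illustrative; clinical systems such as MFX and Delta4PT operate on full 2D/3D dose distributions.

      # A 1D global gamma-index sketch (dose difference + distance to agreement).
      # Profiles, 1 mm grid, and 3%/3 mm criteria are illustrative only.
      import numpy as np

      def gamma_passing_rate(ref, evl, spacing_mm, dd_pct=3.0, dta_mm=3.0):
          """Fraction of reference points with gamma <= 1."""
          pos = np.arange(len(ref)) * spacing_mm
          dd_abs = dd_pct / 100.0 * ref.max()  # global dose-difference criterion
          gammas = [
              np.sqrt(((evl - d) / dd_abs) ** 2 + ((pos - x) / dta_mm) ** 2).min()
              for x, d in zip(pos, ref)
          ]
          return np.mean(np.array(gammas) <= 1.0)

      planned  = np.array([0.20, 0.90, 2.00, 2.00, 1.90, 0.80, 0.20])
      measured = np.array([0.21, 0.88, 1.97, 2.02, 1.88, 0.82, 0.20])
      print(f"gamma passing rate: {gamma_passing_rate(planned, measured, 1.0):.0%}")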

  6. The Language of Scholarship: How to Rapidly Locate and Avoid Common APA Errors.

    PubMed

    Freysteinson, Wyona M; Krepper, Rebecca; Mellott, Susan

    2015-10-01

    This article is relevant for nurses and nursing students who are writing scholarly documents for work, school, or publication and who have a basic understanding of American Psychological Association (APA) style. Common APA errors on the reference list and in citations within the text are reviewed. Methods to quickly find and reduce those errors are shared. Copyright 2015, SLACK Incorporated.

  7. What Do Spelling Errors Tell Us? Classification and Analysis of Errors Made by Greek Schoolchildren with and without Dyslexia

    ERIC Educational Resources Information Center

    Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki

    2013-01-01

    In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…

  8. Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error

    PubMed Central

    Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee

    2017-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146

  9. Endodontic complications of root canal therapy performed by dental students with stainless-steel K-files and nickel-titanium hand files.

    PubMed

    Pettiette, M T; Metzger, Z; Phillips, C; Trope, M

    1999-04-01

    Straightening of curved canals is one of the most common procedural errors in endodontic instrumentation. This problem is commonly encountered when dental students perform molar endodontics. The purpose of this study was to compare the effect of the type of instrument used by these students on the extent of straightening and on the incidence of other endodontic procedural errors. Nickel-titanium 0.02 taper hand files were compared with traditional stainless-steel 0.02 taper K-files. Sixty molar teeth, comprising maxillary and mandibular first and second molars, were treated by senior dental students. Instrumentation was with either nickel-titanium hand files or stainless-steel K-files. Preoperative and postoperative radiographs of each tooth were taken using an XCP precision instrument with a customized bite block to ensure accurate reproduction of radiographic angulation. The radiographs were scanned and the images stored as TIFF files. By superimposing tracings from the preoperative over the postoperative radiographs, the degree of deviation of the apical third of the root canal filling from the original canal was measured. The presence of other errors, such as strip perforation and instrument breakage, was established by examining the radiographs. In curved canals instrumented with stainless-steel K-files, the average deviation of the apical third of the canals was 14.44 degrees (+/- 10.33 degrees). The deviation was significantly reduced, to an average of 4.39 degrees (+/- 4.53 degrees), when nickel-titanium hand files were used. The incidence of other procedural errors was also significantly reduced by the use of nickel-titanium hand files.

  10. The good, the bad and the outliers: automated detection of errors and outliers from groundwater hydrographs

    NASA Astrophysics Data System (ADS)

    Peterson, Tim J.; Western, Andrew W.; Cheng, Xiang

    2018-03-01

    Suspicious groundwater-level observations are common and can arise for many reasons, ranging from an unforeseen biophysical process to bore failure and data management errors. Unforeseen observations may provide valuable insights that challenge existing expectations and can be deemed outliers, while monitoring and data handling failures can be deemed errors and, if ignored, may compromise trend analysis and groundwater model calibration. Ideally, outliers and errors should be identified, but to date this has been a subjective process that is not reproducible and is inefficient. This paper presents an approach to objectively and efficiently identify multiple types of errors and outliers. The approach requires only the observed groundwater hydrograph, requires no particular consideration of the hydrogeology, the drivers (e.g. pumping) or the monitoring frequency, and is freely available in the HydroSight toolbox. Herein, the algorithms and time-series model are detailed and applied to four observation bores with varying dynamics. The detection of outliers was most reliable when the observation data were acquired quarterly or more frequently. Outlier detection where the groundwater-level variance is nonstationary or the absolute trend increases rapidly was more challenging, with the former likely to result in an underestimation of the number of outliers and the latter an overestimation.
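
    The HydroSight algorithms couple outlier detection to a calibrated time-series model; the sketch below is only a generic moving-median/MAD illustration of the basic idea of flagging suspicious points in a hydrograph. The window, threshold, and example series are made up.

      # Not the HydroSight algorithm (which couples detection to a calibrated
      # time-series model); just a generic moving-median/MAD flag for suspicious
      # points. Window, threshold, and the example series are made up.
      import numpy as np

      def flag_outliers(levels, window=7, threshold=3.5):
          """Flag points more than `threshold` robust z-scores from the local median."""
          levels = np.asarray(levels, dtype=float)
          half = window // 2
          flags = np.zeros(len(levels), dtype=bool)
          for i in range(len(levels)):
              local = levels[max(0, i - half): i + half + 1]
              med = np.median(local)
              mad = np.median(np.abs(local - med)) or 1e-9  # avoid divide-by-zero
              flags[i] = abs(levels[i] - med) / (1.4826 * mad) > threshold
          return flags

      water_levels = [5.1, 5.0, 5.2, 5.1, 9.8, 5.0, 4.9, 5.1]  # one suspect spike
      print(flag_outliers(water_levels))  # only the 9.8 m reading is flagged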

  11. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    PubMed Central

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033

  12. Death Certification Errors and the Effect on Mortality Statistics.

    PubMed

    McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth

    Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors by place of death (P < .001). Certificates for deaths occurring in hospitals were more likely to have major errors than certificates for deaths occurring at a private residence (59% vs 39%, P < .001). A total of 580 (93%) death certificates had a change in ICD-10 codes between the original and mock certificates, of which 348 (60%) had a change in the underlying cause-of-death code. Error rates on death certificates in Vermont are high and extend to ICD-10 coding, thereby affecting national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.

  13. The use of a contextual, modal and psychological classification of medication errors in the emergency department: a retrospective descriptive study.

    PubMed

    Cabilan, C J; Hughes, James A; Shannon, Carl

    2017-12-01

    To describe the contextual, modal and psychological classification of medication errors in the emergency department and to identify the factors associated with the reported medication errors. The causes of medication errors are unique to every clinical setting; hence, error minimisation strategies are not always effective. For this reason, it is fundamental to understand the causes specific to the emergency department so that targeted strategies can be implemented. Retrospective analysis of reported medication errors in the emergency department. All voluntarily staff-reported medication-related incidents from 2010-2015 in the hospital's electronic incident management system were retrieved for analysis. Contextual classification involved the time, place and type of medications involved. Modal classification pertained to the stage and issue (e.g. wrong medication, wrong patient). Psychological classification categorised the errors in planning (knowledge-based and rule-based errors) and skill (slips and lapses). There were 405 errors reported. Most errors occurred in the acute care area, short-stay unit and resuscitation area, during the busiest shifts (0800-1559, 1600-2259). Half of the errors involved high-alert medications. Many of the errors occurred during administration (62·7%) and prescribing (28·6%), and some during both stages (18·5%). Wrong dose, wrong medication and omission were the issues that dominated. Knowledge-based errors characterised the errors that occurred in prescribing and administration. The highest proportion of slips (79·5%) and lapses (76·1%) occurred during medication administration. It is likely that some of the errors occurred due to a lack of adherence to safety protocols. Technology such as computerised prescribing, barcode medication administration and reminder systems could potentially decrease medication errors in the emergency department. Some of the errors could possibly have been prevented if safety protocols had been adhered to, which highlights the need to also address clinicians' attitudes towards safety. Technology can be implemented to help minimise errors in the ED, but this must be coupled with efforts to enhance the culture of safety. © 2017 John Wiley & Sons Ltd.

  14. Error Analysis of Indonesian Junior High School Student in Solving Space and Shape Content PISA Problem Using Newman Procedure

    NASA Astrophysics Data System (ADS)

    Sumule, U.; Amin, S. M.; Fuad, Y.

    2018-01-01

    This study aims to determine the types and causes of errors, as well as the efforts attempted to overcome them, made by junior high school students in completing PISA space and shape content problems. Two subjects were selected based on mathematical ability test results as making the most errors while still being able to communicate orally and in writing. The two selected subjects then worked on a PISA ability test question and were interviewed to find out the type and cause of each error, and were then given scaffolding based on the type of mistake made. The results show that the types of errors the students made were comprehension and transformation errors. The causes were that students were not able to identify the keywords in the question, write down what was known or given, or specify formulas to devise a plan. To overcome these errors, students were given scaffolding. The scaffolding given to overcome comprehension errors was reviewing and restructuring, while the scaffolding given to overcome transformation errors was reviewing, restructuring, explaining, and developing representational tools. Teachers are advised to use scaffolding to resolve errors so that students are able to avoid them.

  15. Data Mining on Numeric Error in Computerized Physician Order Entry System Prescriptions.

    PubMed

    Wu, Xue; Wu, Changxu

    2017-01-01

    This study revealed the numeric error patterns related to dosage when doctors prescribed in a computerized physician order entry system. Error categories showed that the '6', '7', and '9' keys produced a higher incidence of errors in numeric keypad (Numpad) typing, while the '2', '3', and '0' keys produced a higher incidence of errors in main-keyboard digit-line typing. Errors categorized as omission and substitution were more prevalent than transposition and intrusion errors.
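
    A minimal sketch of the four error categories used in the study, applied to an intended versus typed digit string. The classification rules here are a simplified reading of the standard definitions, not the authors' actual coding procedure.

      # A simplified coder for the four categories named above, comparing an
      # intended dose string with what was typed. Rules are my reading of the
      # standard definitions, not the authors' exact coding scheme.
      def classify_numeric_error(intended: str, typed: str) -> str:
          if typed == intended:
              return "correct"
          if len(typed) == len(intended):
              diffs = [i for i, (a, b) in enumerate(zip(intended, typed)) if a != b]
              if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                      and intended[diffs[0]] == typed[diffs[1]]
                      and intended[diffs[1]] == typed[diffs[0]]):
                  return "transposition"  # two adjacent digits swapped
              if len(diffs) == 1:
                  return "substitution"   # one digit replaced
          if len(typed) == len(intended) - 1:
              return "omission"           # one digit dropped
          if len(typed) == len(intended) + 1:
              return "intrusion"          # one extra digit typed
          return "other"

      assert classify_numeric_error("250", "25") == "omission"
      assert classify_numeric_error("250", "520") == "transposition"
      assert classify_numeric_error("250", "260") == "substitution"
      assert classify_numeric_error("250", "2500") == "intrusion"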

  16. Using technology to prevent adverse drug events in the intensive care unit.

    PubMed

    Hassan, Erkan; Badawi, Omar; Weber, Robert J; Cohen, Henry

    2010-06-01

    Critically ill patients are particularly susceptible to adverse drug events (ADEs) due to their rapidly changing and unstable physiology, complex therapeutic regimens, and large percentage of medications administered intravenously. There are a wide variety of technologies that can help prevent the points of failure commonly associated with ADEs (i.e., the five "Rights": right patient; right drug; right route; right dose; right frequency). These technologies are often categorized by their degree of complexity to design and engineer and the type of error they are designed to prevent. Focusing solely on the software and hardware design of technology may over- or underestimate the degree of difficulty to avoid ADEs at the bedside. Alternatively, we propose categorizing technological solutions by identifying the factors essential for success. The two major critical success factors are: 1) the degree of clinical assessment required by the clinician to appropriately evaluate and disposition the issue identified by a technology; and 2) the complexity associated with effective implementation. This classification provides a way of determining how ADE-preventing technologies in the intensive care unit can be successfully integrated into clinical practice. Although there are limited data on the effectiveness of many technologies in reducing ADEs, we will review the technologies currently available in the intensive care unit environment. We will also discuss critical success factors for implementation, common errors made during implementation, and the potential errors using these systems.

  17. A Mixture Modeling Framework for Differential Analysis of High-Throughput Data

    PubMed Central

    Taslim, Cenny; Lin, Shili

    2014-01-01

    The inventions of microarray and next-generation sequencing technologies have revolutionized research in genomics; these platforms have led to massive amounts of data in gene expression, methylation, and protein-DNA interactions. A common theme among a number of biological problems using high-throughput technologies is differential analysis. Despite the common theme, different data types have their own unique features, creating a "moving target" scenario. As such, methods specifically designed for one data type may not lead to satisfactory results when applied to another data type. To meet this challenge, so that not only currently existing data types but also data from future problems, platforms, or experiments can be analyzed, we propose a mixture modeling framework that is flexible enough to automatically adapt to any moving target. More specifically, the approach considers several classes of mixture models and essentially provides a model-based procedure whose model is adaptive to the particular data being analyzed. We demonstrate the utility of the methodology by applying it to three types of real data: gene expression, methylation, and ChIP-seq. We also carried out simulations to gauge the performance and showed that the approach can be more efficient than any individual model without inflating Type I error. PMID:25057284
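
    As a minimal illustration of the model-based idea (not the paper's full framework, which selects among several richer mixture classes), the sketch below fits a two-component Gaussian mixture to simulated per-gene scores and calls the shifted component "differential".

      # A toy version of the model-based idea: fit a two-component Gaussian
      # mixture to simulated per-gene scores and call the shifted component
      # "differential". The real framework selects among richer mixture classes.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      scores = np.concatenate([
          rng.normal(0.0, 1.0, 900),   # null genes: no change
          rng.normal(2.5, 1.0, 100),   # truly differential genes
      ]).reshape(-1, 1)

      gm = GaussianMixture(n_components=2, random_state=0).fit(scores)
      diff_comp = int(np.argmax(gm.means_))    # component with the larger mean
      posterior = gm.predict_proba(scores)[:, diff_comp]
      print(f"genes called differential: {(posterior > 0.9).sum()}")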

  18. Impact of Internally Developed Electronic Prescription on Prescribing Errors at Discharge from the Emergency Department

    PubMed Central

    Hitti, Eveline; Tamim, Hani; Bakhti, Rinad; Zebian, Dina; Mufarrij, Afif

    2017-01-01

    Introduction Medication errors are common, with studies reporting at least one error per patient encounter. At hospital discharge, medication errors vary from 15%–38%. However, studies assessing the effect of an internally developed electronic (E)-prescription system at discharge from an emergency department (ED) are comparatively minimal. Additionally, commercially available electronic solutions are cost-prohibitive in many resource-limited settings. We assessed the impact of introducing an internally developed, low-cost E-prescription system, with a list of commonly prescribed medications, on prescription error rates at discharge from the ED, compared to handwritten prescriptions. Methods We conducted a pre- and post-intervention study comparing error rates in a randomly selected sample of discharge prescriptions (handwritten versus electronic) five months pre and four months post the introduction of the E-prescription. The internally developed, E-prescription system included a list of 166 commonly prescribed medications with the generic name, strength, dose, frequency and duration. We included a total of 2,883 prescriptions in this study: 1,475 in the pre-intervention phase were handwritten (HW) and 1,408 in the post-intervention phase were electronic. We calculated rates of 14 different errors and compared them between the pre- and post-intervention period. Results Overall, E-prescriptions included fewer prescription errors as compared to HW-prescriptions. Specifically, E-prescriptions reduced missing dose (11.3% to 4.3%, p <0.0001), missing frequency (3.5% to 2.2%, p=0.04), missing strength errors (32.4% to 10.2%, p <0.0001) and legibility (0.7% to 0.2%, p=0.005). E-prescriptions, however, were associated with a significant increase in duplication errors, specifically with home medication (1.7% to 3%, p=0.02). Conclusion A basic, internally developed E-prescription system, featuring commonly used medications, effectively reduced medication errors in a low-resource setting where the costs of sophisticated commercial electronic solutions are prohibitive. PMID:28874948
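
    A minimal sketch of the kind of pre/post comparison reported above, using the missing-dose counts implied by the abstract (11.3% of 1,475 handwritten versus 4.3% of 1,408 electronic prescriptions; counts rounded). The chi-square test shown is a standard choice for comparing two proportions and is assumed, not confirmed, to match the authors' exact method.

      # Reconstructing the pre/post comparison for one error type, using counts
      # implied by the reported missing-dose rates (11.3% of 1,475 handwritten,
      # 4.3% of 1,408 electronic; counts rounded).
      from scipy.stats import chi2_contingency

      hw_err, hw_total = 167, 1475   # ~11.3% missing-dose errors, handwritten
      e_err, e_total = 61, 1408      # ~4.3% missing-dose errors, electronic

      table = [[hw_err, hw_total - hw_err],
               [e_err, e_total - e_err]]
      stat, p, dof, _ = chi2_contingency(table)
      print(f"chi2 = {stat:.1f}, p = {p:.1e}")  # consistent with reported p < 0.0001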

  20. Medical error and related factors during internship and residency.

    PubMed

    Ahmadipour, Habibeh; Nahid, Mortazavi

    2015-01-01

    It is difficult to determine the real incidence of medical errors due to the lack of a precise definition of errors, as well as the failure to report them under certain circumstances. We carried out a cross-sectional study at Kerman University of Medical Sciences, Iran in 2013. The participants were selected through the census method. The data were collected using a self-administered questionnaire, which consisted of questions on the participants' demographic data and questions on the medical errors committed. The data were analysed by SPSS 19. It was found that 270 participants had committed medical errors. There was no significant difference in the frequency of errors committed by interns and residents. In the case of residents, the most common error was misdiagnosis and in that of interns, errors related to history-taking and physical examination. Considering that medical errors are common in the clinical setting, the education system should train interns and residents to prevent the occurrence of errors. In addition, the system should develop a positive attitude among them so that they can deal better with medical errors.

  1. Introduction to the Application of Web-Based Surveys.

    ERIC Educational Resources Information Center

    Timmerman, Annemarie

    This paper discusses some basic assumptions and issues concerning web-based surveys. Discussion includes: assumptions regarding cost and ease of use; disadvantages of web-based surveys, concerning the inability to compensate for four common errors of survey research: coverage error, sampling error, measurement error and nonresponse error; and…

  2. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  3. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  4. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  5. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  6. Attitude errors arising from antenna/satellite altitude errors - Recognition and reduction

    NASA Technical Reports Server (NTRS)

    Godbey, T. W.; Lambert, R.; Milano, G.

    1972-01-01

    A review is presented of the three basic types of pulsed radar altimeter designs, as well as the source and form of altitude bias errors arising from antenna/satellite attitude errors in each design type. A quantitative comparison of the three systems was also made.

  7. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    NASA Astrophysics Data System (ADS)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    A slow learner, whose IQ is between 71 and 89, will have difficulties in solving mathematics problems that often lead to errors. These errors can be analyzed for where they occur and their type. This research is a qualitative descriptive study which aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class when solving fraction problems. The subject of this research is one slow learner, a seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semistructured interviews. The collected data were analyzed by Newman's Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by slow learners.

  8. Secondary data analysis of large data sets in urology: successes and errors to avoid.

    PubMed

    Schlomer, Bruce J; Copp, Hillary L

    2014-03-01

    Secondary data analysis is the use of data collected for research by someone other than the investigator. In the last several years there has been a dramatic increase in the number of these studies being published in urological journals and presented at urological meetings, especially those involving secondary data analysis of large administrative data sets. Along with this expansion, skepticism toward secondary data analysis studies has increased among many urologists. In this narrative review we discuss the types of large data sets that are commonly used for secondary data analysis in urology, and we discuss the advantages and disadvantages of secondary data analysis. A literature search was performed to identify urological secondary data analysis studies published since 2008 using commonly used large data sets, and examples of high quality studies published in high impact journals are given. We outline an approach for performing a successful hypothesis- or goal-driven secondary data analysis study and highlight common errors to avoid. More than 350 secondary data analysis studies using large data sets have been published on urological topics since 2008, with likely many more studies presented at meetings but never published. Studies that are not hypothesis or goal driven have likely constituted some of these and have probably contributed to the increased skepticism toward this type of research. However, many high quality, hypothesis-driven studies addressing research questions that would have been difficult to conduct with other methods have been performed in the last few years. Secondary data analysis is a powerful tool that can address questions which could not be adequately studied by another method. Knowledge of the limitations of secondary data analysis and of the data sets used is critical for a successful study. There are also important errors to avoid when planning and performing a secondary data analysis study. Investigators and the urological community need to strive to use secondary data analysis of large data sets appropriately to produce high quality studies that hopefully lead to improved patient outcomes. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  9. Color filter array design based on a human visual model

    NASA Astrophysics Data System (ADS)

    Parmar, Manu; Reeves, Stanley J.

    2004-05-01

    To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.

  10. ERP correlates of error processing during performance on the Halstead Category Test.

    PubMed

    Santos, I M; Teixeira, A R; Tomé, A M; Pereira, A T; Rodrigues, P; Vagos, P; Costa, J; Carrito, M L; Oliveira, B; DeFilippis, N A; Silva, C F

    2016-08-01

    The Halstead Category Test (HCT) is a neuropsychological test that measures a person's ability to formulate and apply abstract principles. Performance must be adjusted based on feedback after each trial, and errors are common until the underlying rules are discovered. Event-related potential (ERP) studies associated with the HCT are lacking. This paper demonstrates the use of a methodology inspired by Singular Spectrum Analysis (SSA), applied to EEG signals, to remove high-amplitude ocular and movement artifacts during performance on the test. This filtering technique introduces no phase or latency distortions, with minimum loss of relevant EEG information. Importantly, the test was applied in its original clinical format, without introducing adaptations to ERP recordings. After signal treatment, the feedback-related negativity (FRN) wave, which is related to error processing, was identified. This component peaked around 250 ms after feedback, in fronto-central electrodes. As expected, errors elicited more negative amplitudes than correct responses. Results are discussed in terms of the increased clinical potential that coupling ERP information with behavioral performance data can bring to the specificity of the HCT in diagnosing different types of impairment in frontal brain function. Copyright © 2016. Published by Elsevier B.V.
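
    A minimal single-channel sketch of the SSA-style decomposition: embed the signal in a Hankel trajectory matrix, take its SVD, and reconstruct the leading (high-amplitude, artifact-dominated) subspace so it can be subtracted. The window length, component count, and simulated signal are hypothetical; the paper's filtering method is more elaborate.

      # A single-channel SSA sketch: embed, SVD, and reconstruct the leading
      # subspace. Window, component count, and the simulated "EEG" are made up.
      import numpy as np

      def ssa_reconstruct(signal, window, n_components):
          """Project `signal` onto its leading SSA components."""
          n, k = len(signal), len(signal) - window + 1
          traj = np.column_stack([signal[i:i + window] for i in range(k)])
          u, s, vt = np.linalg.svd(traj, full_matrices=False)
          approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
          recon, counts = np.zeros(n), np.zeros(n)
          for col in range(k):  # anti-diagonal averaging back to a series
              recon[col:col + window] += approx[:, col]
              counts[col:col + window] += 1
          return recon / counts

      t = np.arange(1000)
      eeg = np.sin(0.3 * t) + 0.1 * np.random.default_rng(1).standard_normal(1000)
      blink = 10 * np.exp(-((t - 500) / 30.0) ** 2)   # high-amplitude artifact
      artifact_estimate = ssa_reconstruct(eeg + blink, window=100, n_components=2)
      cleaned = (eeg + blink) - artifact_estimate     # should retain mostly the sine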

  11. Sources of error in the retracted scientific literature.

    PubMed

    Casadevall, Arturo; Steen, R Grant; Fang, Ferric C

    2014-09-01

    Retraction of flawed articles is an important mechanism for correction of the scientific literature. We recently reported that the majority of retractions are associated with scientific misconduct. In the current study, we focused on the subset of retractions for which no misconduct was identified, in order to identify the major causes of error. Analysis of the retraction notices for 423 articles indexed in PubMed revealed that the most common causes of error-related retraction are laboratory errors, analytical errors, and irreproducible results. The most common laboratory errors are contamination and problems relating to molecular biology procedures (e.g., sequencing, cloning). Retractions due to contamination were more common in the past, whereas analytical errors are now increasing in frequency. A number of publications that have not been retracted despite being shown to contain significant errors suggest that barriers to retraction may impede correction of the literature. In particular, few cases of retraction due to cell line contamination were found despite recognition that this problem has affected numerous publications. An understanding of the errors leading to retraction can guide practices to improve laboratory research and the integrity of the scientific literature. Perhaps most important, our analysis has identified major problems in the mechanisms used to rectify the scientific literature and suggests a need for action by the scientific community to adopt protocols that ensure the integrity of the publication process. © FASEB.

  12. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    PubMed

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to the low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and their underlying factors during chemotherapy preparation and administration, based on a systematic survey conducted to reflect oncology nurses' experience. This study was conducted in 18 adult chemotherapy units with the voluntary participation of 206 nurses. A survey was developed by the primary investigators; medication errors (MEs) were defined as preventable errors during the prescribing, ordering, preparation, or administration of medication. The survey consisted of 4 parts: demographic features of nurses; workload of chemotherapy units; errors and their estimated monthly number during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MEs. The survey was conducted by face-to-face interview, and data analyses were performed with descriptive statistics. Chi-square or Fisher exact tests were used for comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more errors during chemotherapy preparation and administration. Prescribing or ordering of wrong doses by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and insufficient numbers of staff (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, with the most common errors involving prescribing and ordering. Further studies must address strategies to minimize medication errors in patients receiving chemotherapy, determine sufficient protective measures, and establish multistep control mechanisms.

  13. Didn't You Run the Spell Checker? Effects of Type of Spelling Error and Use of a Spell Checker on Perceptions of the Author

    ERIC Educational Resources Information Center

    Figueredo, Lauren; Varnhagen, Connie K.

    2005-01-01

    We investigated expectations regarding a writer's responsibility to proofread text for spelling errors when using a word processor. Undergraduate students read an essay and completed a questionnaire regarding their perceptions of the author and the quality of the essay. They then manipulated type of spelling error (no error, homophone error,…

  14. Trends in Health Information Technology Safety: From Technology-Induced Errors to Current Approaches for Ensuring Technology Safety

    PubMed Central

    2013-01-01

    Objectives Health information technology (HIT) research findings have suggested that new healthcare technologies can reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, on models and frameworks used to understand these new types of errors, on monitoring of such errors, and on methods that can be used to prevent them. More research will be needed to better understand and mitigate these types of errors. PMID:23882411

  15. Economic Value of Improved Accuracy for Self-Monitoring of Blood Glucose Devices for Type 1 and Type 2 Diabetes in England.

    PubMed

    McQueen, Robert Brett; Breton, Marc D; Craig, Joyce; Holmes, Hayden; Whittington, Melanie D; Ott, Markus A; Campbell, Jonathan D

    2018-04-01

    The objective was to model clinical and economic outcomes of self-monitoring blood glucose (SMBG) devices with varying error ranges and strip prices for type 1 and insulin-treated type 2 diabetes patients in England. We programmed a simulation model that included separate risk and complication estimates by type of diabetes and evidence from in silico modeling validated by the Food and Drug Administration. Changes in SMBG error were associated with changes in hemoglobin A1c (HbA1c) and, separately, with changes in hypoglycemia. Markov cohort simulation estimated clinical and economic outcomes. An SMBG device with 8.4% error and strip price of £0.30 (exceeding accuracy requirements by International Organization for Standardization [ISO] 15197:2013/EN ISO 15197:2015) was compared to a device with 15% error (accuracy meeting ISO 15197:2013/EN ISO 15197:2015) and price of £0.20. Outcomes were lifetime costs, quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs). With SMBG errors associated with changes in HbA1c only, the ICER was £3064 per QALY in type 1 diabetes and £264 668 per QALY in insulin-treated type 2 diabetes for an SMBG device with 8.4% versus 15% error. With SMBG errors associated with hypoglycemic events only, the device exceeding accuracy requirements was cost-saving and more effective in insulin-treated type 1 and type 2 diabetes. Investment in devices with higher strip prices but improved accuracy (less error) appears to be an efficient strategy for insulin-treated diabetes patients at high risk of severe hypoglycemia.
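
    The cost-effectiveness arithmetic behind the reported ICERs reduces to incremental cost divided by incremental QALYs. The sketch below shows the computation with made-up lifetime totals; these are not the study's Markov model outputs.

      # The ICER arithmetic behind the reported results: incremental cost over
      # incremental QALYs. Lifetime totals below are invented for illustration.
      def icer(cost_new, qaly_new, cost_old, qaly_old):
          """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
          return (cost_new - cost_old) / (qaly_new - qaly_old)

      # Hypothetical device with 8.4% error vs. comparator with 15% error.
      print(f"ICER: GBP {icer(41_200, 10.92, 40_500, 10.70):,.0f} per QALY")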

  16. The Relationship Between Technical Errors and Decision Making Skills in the Junior Resident

    PubMed Central

    Nathwani, J. N.; Fiers, R.M.; Ray, R.D.; Witt, A.K.; Law, K. E.; DiMarco, S.M.; Pugh, C.M.

    2017-01-01

    Objective The purpose of this study was to co-evaluate resident technical errors and decision-making capabilities during placement of a subclavian central venous catheter (CVC). We hypothesized that there would be significant correlations between scenario-based decision-making skills and technical proficiency in central line insertion. We also predicted that residents would have problems in anticipating common difficulties and generating solutions associated with line placement. Design Participants were asked to insert a subclavian central line on a simulator. After completion, residents were presented with a real-life patient photograph depicting CVC placement and asked to anticipate difficulties and generate solutions. Error rates were analyzed using chi-square tests and a 5% expected error rate. Correlations were sought by comparing technical errors and scenario-based decision making. Setting This study was carried out at seven tertiary care centers. Participants Study participants (N=46) consisted largely of first-year research residents who could be followed longitudinally. Second-year research and clinical residents were not excluded. Results Six checklist errors were committed more often than anticipated. Residents committed an average of 1.9 errors, significantly more than the at most 1 error per person expected (t(44)=3.82, p<.001). The most common error was performance of the procedure steps in the wrong order (28.5%, P<.001). Some residents (24%) had no errors, 30% committed one error, and 46% committed more than one error. The number of technical errors committed correlated negatively with the total number of commonly identified difficulties and with the number of generated solutions (r(33)=−.429, p=.021 and r(33)=−.383, p=.044, respectively). Conclusions Almost half of the surgical residents committed multiple errors while performing subclavian CVC placement. The correlation between technical errors and decision-making skills suggests a critical need to train residents in both technique and error management. ACGME Competencies Medical Knowledge, Practice Based Learning and Improvement, Systems Based Practice PMID:27671618

  17. P value and the theory of hypothesis testing: an explanation for new researchers.

    PubMed

    Biau, David Jean; Jolles, Brigitte M; Porcher, Raphaël

    2010-03-01

    In the 1920s, Ronald Fisher developed the theory behind the p value and Jerzy Neyman and Egon Pearson developed the theory of hypothesis testing. These distinct theories have provided researchers important quantitative tools to confirm or refute their hypotheses. The p value is the probability to obtain an effect equal to or more extreme than the one observed presuming the null hypothesis of no effect is true; it gives researchers a measure of the strength of evidence against the null hypothesis. As commonly used, investigators will select a threshold p value below which they will reject the null hypothesis. The theory of hypothesis testing allows researchers to reject a null hypothesis in favor of an alternative hypothesis of some effect. As commonly used, investigators choose Type I error (rejecting the null hypothesis when it is true) and Type II error (accepting the null hypothesis when it is false) levels and determine some critical region. If the test statistic falls into that critical region, the null hypothesis is rejected in favor of the alternative hypothesis. Despite similarities between the two, the p value and the theory of hypothesis testing are different theories that often are misunderstood and confused, leading researchers to improper conclusions. Perhaps the most common misconception is to consider the p value as the probability that the null hypothesis is true rather than the probability of obtaining the difference observed, or one that is more extreme, considering the null is true. Another concern is the risk that an important proportion of statistically significant results are falsely significant. Researchers should have a minimum understanding of these two theories so that they are better able to plan, conduct, interpret, and report scientific experiments.
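
    A minimal simulation of the two error types defined above: the Type I rate is how often the null is rejected when it is true, and the Type II rate is how often it is accepted when a real effect exists. The effect size, sample size, and choice of a two-sample t test are arbitrary illustrations.

      # Simulating both error types with a two-sample t test: Type I = rejection
      # rate when the null is true; Type II = acceptance rate when a real effect
      # (here d = 0.5) exists. Sample size and effect size are arbitrary.
      import numpy as np
      from scipy.stats import ttest_ind

      rng = np.random.default_rng(42)
      alpha, n, trials = 0.05, 30, 2000
      type1 = type2 = 0
      for _ in range(trials):
          a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)    # null is true
          type1 += ttest_ind(a, b).pvalue < alpha
          a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)  # real effect exists
          type2 += ttest_ind(a, b).pvalue >= alpha
      print(f"Type I rate  ~ {type1 / trials:.3f} (nominal alpha = {alpha})")
      print(f"Type II rate ~ {type2 / trials:.3f} (one minus power)")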

  18. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    PubMed

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
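
    A minimal Monte Carlo sketch of the phenomenon described above: the exposure has no true effect, but adjusting for a median-split version of the confounder leaves residual confounding that inflates the Type-I error for the exposure. The sample size and effect strengths are arbitrary and far simpler than the paper's 9600-simulation design.

      # A scaled-down Monte Carlo of the inflation mechanism: the exposure has
      # no true effect on the outcome, but adjusting for a median-split version
      # of the confounder leaves residual confounding.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n, trials, alpha = 500, 500, 0.05
      false_positives = 0
      for _ in range(trials):
          c = rng.normal(size=n)                    # continuous confounder
          x = 0.7 * c + rng.normal(size=n)          # exposure depends on c
          y = 0.7 * c + rng.normal(size=n)          # outcome depends on c, not x
          c_cat = (c > np.median(c)).astype(float)  # dichotomized confounder
          design = sm.add_constant(np.column_stack([x, c_cat]))
          false_positives += sm.OLS(y, design).fit().pvalues[1] < alpha
      print(f"Type-I error for x: {false_positives / trials:.2f} (nominal 0.05)")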

  19. Neuropsychological analysis of a typewriting disturbance following cerebral damage.

    PubMed

    Boyle, M; Canter, G J

    1987-01-01

    Following a left CVA, a skilled professional typist sustained a disturbance of typing disproportionate to her handwriting disturbance. Typing errors were predominantly of the sequencing type, with spatial errors much less frequent, suggesting that the impairment was based on a relatively early (premotor) stage of processing. Depriving the subject of visual feedback during handwriting greatly increased her error rate. Similarly, interfering with auditory feedback during speech substantially reduced her self-correction of speech errors. These findings suggested that impaired ability to utilize somesthetic information--probably caused by the subject's parietal lobe lesion--may have been the basis of the typing disorder.

  20. 1993 annual status report: a summary of fish data in six reaches of the upper Mississippi River system

    USGS Publications Warehouse

    Gutreuter, Steve; Burkhardt, Randy W.; Stopyro, Mark; Bartels, Andrew; Kramer, Eric; Bowler, Melvin C.; Cronin, Frederick A.; Soergel, Dirk W.; Petersen, Michael D.; Herzog, David P.; Raibley, Paul T.; Irons, Kevin S.; O'Hara, Timothy M.

    1997-01-01

    The Long Term Resource Monitoring Program (LTRMP) completed 1,994 collections of fishes from stratified random and permanently fixed sampling locations in six study reaches of the Upper Mississippi River System during 1993. Collection methods included day and night electrofishing, hoop netting, fyke netting (two net sizes), gill netting, seining, and trawling in select aquatic area classes. The six LTRMP study reaches are Pools 4 (excluding Lake Pepin), 8, 13, and 26 of the Upper Mississippi River, an unimpounded reach of the Mississippi River near Cape Girardeau, Missouri, and the La Grange Pool of the Illinois River. A total of 62-78 fish species were detected in each study reach. For each of the six LTRMP study reaches, this report contains summaries of: (1) sampling efforts in each combination of gear type and aquatic area class, (2) total catches of each species from each gear type, (3) mean catch-per-unit of gear effort statistics and standard errors for common species from each combination of aquatic area class and selected gear type, and (4) length distributions of common species from selected gear types.

  1. 1994 annual status report: a summary of fish data in six reaches of the upper Mississippi River system

    USGS Publications Warehouse

    Gutreuter, Steve; Burkhardt, Randy W.; Stopyro, Mark; Bartels, Andrew; Kramer, Eric; Bowler, Melvin C.; Cronin, Frederick A.; Soergel, Dirk W.; Petersen, Michael D.; Herzog, David P.; Raibley, Paul T.; Irons, Kevin S.; O'Hara, Timothy M.

    1997-01-01

    The Long Term Resource Monitoring Program (LTRMP) completed 2,653 collections of fishes from stratified random and permanently fixed sampling locations in six study reaches of the Upper Mississippi River System during 1994. Collection methods included day and night electrofishing, hoop netting, fyke netting (two net sizes), gill netting, seining, and trawling in select aquatic area classes. The six LTRMP study areas are Pools 4 (excluding Lake Pepin), 8, 13, and 26 of the Upper Mississippi River, an unimpounded reach of the Mississippi River near Cape Girardeau, Missouri, and the La Grange Pool of the Illinois River. A total of 61-79 fish species were detected in each study area. For each of the six LTRMP study areas, this report contains summaries of (1) sampling efforts in each combination of gear type and aquatic area class, (2) total catches of each species from each gear type, (3) mean catch-per-unit of gear effort statistics and standard errors for common species from each combination of aquatic area class and selected gear type, and (4) length distributions of common species from selected gear types.

  2. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    In this study, we assessed the explicit reporting of medical errors in the electronic record. We looked for cases in which the provider explicitly stated that he or she or another provider had committed an error. The advantage of the technique is that it is not limited to a specific type of error. Our goals were to 1) measure the rate at which medical errors were documented in medical records, and 2) characterize the types of errors that were reported.
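
    A minimal sketch of keyword-based screening of narrative notes for explicit error statements. The keyword list and example notes are illustrative, not the study's actual search terms or corpus.

      # A toy keyword screen for explicit error statements in narrative notes.
      # The pattern and notes are illustrative, not the study's actual terms.
      import re

      ERROR_TERMS = re.compile(
          r"\b(error|mistake|mistakenly|inadvertently|incorrectly)\b", re.IGNORECASE)

      notes = [
          "Patient inadvertently received a double dose of heparin overnight.",
          "No acute events; continue current regimen.",
          "Dose was incorrectly transcribed; error documented and corrected.",
      ]
      flagged = [note for note in notes if ERROR_TERMS.search(note)]
      print(f"{len(flagged)} of {len(notes)} notes contain an explicit error term")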

  3. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
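
    The exponential decline is easy to see numerically. Below is a minimal sketch (an illustration of the general point, not the authors' code) computing the type II error of a two-sided one-sample z-test as sample size grows; the effect size d and alpha are arbitrary illustrative choices.

        # Type II error of a two-sided one-sample z-test with effect size d
        # at significance level alpha, as a function of sample size n.
        from scipy.stats import norm

        alpha, d = 0.05, 0.5
        z_crit = norm.ppf(1 - alpha / 2)
        for n in (10, 20, 40, 80):
            ncp = d * n ** 0.5          # noncentrality under the alternative
            beta = norm.cdf(z_crit - ncp) - norm.cdf(-z_crit - ncp)
            print(f"n={n:3d}  type II error = {beta:.2e}")

    For large n, beta behaves like a Gaussian tail in d*sqrt(n), i.e. it decays on the order of exp(-d^2 n / 2), which is the exponential decline the abstract refers to.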

  4. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  5. Restrictions on surgical resident shift length do not impact type of medical errors.

    PubMed

    Anderson, Jamie E; Goodman, Laura F; Jensen, Guy W; Salcedo, Edgardo S; Galante, Joseph M

    2017-05-15

    In 2011, resident duty hours were restricted in an attempt to improve patient safety and resident education. With the goal of reducing fatigue, shorter shift length leads to more patient handoffs, raising concerns about adverse effects on patient safety. This study seeks to determine whether differences in duty-hour restrictions influence the types of errors made by residents. This is a nested retrospective cohort study at a surgery department in an academic medical center. During 2013-14, standard 2011 duty hours were in place for residents. In 2014-15, duty-hour restrictions at the study site were relaxed ("flexible") with no restrictions on shift length. We reviewed all morbidity and mortality submissions from July 1, 2013, to June 30, 2015 and compared differences in types of errors between these periods. A total of 383 patients experienced adverse events, including 59 deaths (15.4%). Comparing standard versus flexible periods, there was no difference in mortality (15.7% versus 12.6%, P = 0.479) or complication rates (2.6% versus 2.5%, P = 0.696). There was no difference in types of errors between periods (P = 0.050-0.808). The greatest number of errors was attributed to cognitive failures (229, 59.6%), whereas the fewest were due to team failure (127, 33.2%). By subset, technical errors accounted for the highest number of errors (169, 44.1%). There were no differences in types of errors for cases that were nonelective, at night, or involving residents. Among adverse events reported in this departmental surgical morbidity and mortality, there were no differences in types of errors when resident duty hours were less restrictive. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We looked at and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Master's thesis. In the thesis we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued with our work in the vector quantization area. We have also developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share a common property: they use past data to encode future data. This is done either via taking differences, context modeling, or by building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired by the user. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested! A paper describing these results is included.

  7. New technologies in radiation therapy: ensuring patient safety, radiation safety and regulatory issues in radiation oncology.

    PubMed

    Amols, Howard I

    2008-11-01

    New technologies such as intensity modulated and image guided radiation therapy, computer controlled linear accelerators, record and verify systems, electronic charts, and digital imaging have revolutionized radiation therapy over the past 10-15 y. Quality assurance (QA) as historically practiced and as recommended in reports such as American Association of Physicists in Medicine Task Groups 40 and 53 needs to be updated to address the increasing complexity and computerization of radiotherapy equipment, and the increased quantity of data defining a treatment plan and treatment delivery. While new technology has reduced the probability of many types of medical events, new types of errors caused by improper use of new technology, communication failures between computers, corrupted or erroneous computer data files, and "software bugs" are now being seen. The use of computed tomography, magnetic resonance, and positron emission tomography imaging has become routine for many types of radiotherapy treatment planning, and QA for imaging modalities is beyond the expertise of most radiotherapy physicists. Errors in radiotherapy rarely result solely from hardware failures. More commonly they are a combination of computer and human errors. The increased use of radiosurgery, hypofractionation, more complex intensity modulated treatment plans, image guided radiation therapy, and increasing financial pressures to treat more patients in less time will continue to fuel this reliance on high technology and complex computer software. Clinical practitioners and regulatory agencies are beginning to realize that QA for new technologies is a major challenge and poses dangers different in nature from those that are historically familiar.

  8. ASCERTAINMENT OF ON-ROAD SAFETY ERRORS BASED ON VIDEO REVIEW

    PubMed Central

    Dawson, Jeffrey D.; Uc, Ergun Y.; Anderson, Steven W.; Dastrup, Elizabeth; Johnson, Amy M.; Rizzo, Matthew

    2011-01-01

    Summary: Using an instrumented vehicle, we have studied several aspects of the on-road performance of healthy and diseased elderly drivers. One goal of such studies is to ascertain the type and frequency of driving safety errors. Because the judgment of such errors is somewhat subjective, we applied a taxonomy system of 15 general safety error categories and 76 specific safety error types. We also employed and trained professional driving instructors to review the video data of the on-road drives. In this report, we illustrate our rating system on a group of 111 drivers, ages 65 to 89. These drivers made errors in 13 of the 15 error categories, comprising 42 of the 76 error types. A mean (SD) of 35.8 (12.8) safety errors per drive was noted, with 2.1 (1.7) of them judged as serious. Our methodology may be useful in applications such as intervention studies and in longitudinal studies of changes in driving abilities in patients with declining cognitive ability. PMID:24273753

  9. Medication prescribing errors in the medical intensive care unit of Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia.

    PubMed

    Sada, Oumer; Melkie, Addisu; Shibeshi, Workineh

    2015-09-16

    Medication errors (MEs) are important problems in all hospitalized populations, especially in the intensive care unit (ICU). Little is known about the prevalence of medication prescribing errors in the ICUs of hospitals in Ethiopia. The aim of this study was to assess medication prescribing errors in the ICU of Tikur Anbessa Specialized Hospital using a retrospective cross-sectional analysis of patient cards and medication charts. About 220 patient charts were reviewed, covering a total of 1311 patient-days and 882 prescription episodes. In all, 359 MEs were detected, a prevalence of 40 per 100 orders. Common prescribing errors were omission errors 154 (42.89%), wrong combination 101 (28.13%), wrong abbreviation 48 (13.37%), wrong dose 30 (8.36%), wrong frequency 18 (5.01%), and wrong indication 8 (2.23%). The present study shows that medication errors are common in the medical ICU of Tikur Anbessa Specialized Hospital. These results suggest future targets for prevention strategies to reduce the rate of medication errors.

  10. Differences among Job Positions Related to Communication Errors at Construction Sites

    NASA Astrophysics Data System (ADS)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  11. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep learning has been massively used for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been recently studied. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image. This approach leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques. On the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22 % of total error, 4.10 % of type I error, and 15.07 % of type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while the type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02 % of total error, 2.15 % of type I error and 6.14 % of type II error.
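
    For reference, the total, type I, and type II errors quoted above are the standard ground-filtering measures: type I is the fraction of true ground points rejected, type II the fraction of non-ground points accepted. A small sketch of the computation (variable names and the toy check are mine):

        import numpy as np

        def filter_errors(y_true, y_pred):
            """Ground-filter error rates. Labels: 1 = ground, 0 = non-ground."""
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            ground = y_true == 1
            type1 = np.mean(y_pred[ground] == 0)    # ground rejected as non-ground
            type2 = np.mean(y_pred[~ground] == 1)   # non-ground accepted as ground
            total = np.mean(y_pred != y_true)
            return total, type1, type2

        print(filter_errors([1, 1, 0, 0], [1, 0, 0, 1]))  # (0.5, 0.5, 0.5)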

  12. Electronic Health Record-Related Safety Concerns: A Cross-Sectional Survey of Electronic Health Record Users

    PubMed Central

    Pajunen, Tuuli; Saranto, Kaija; Lehtonen, Lasse

    2016-01-01

    Background: The rapid expansion in the use of electronic health records (EHR) has increased the number of medical errors originating in health information systems (HIS). The sociotechnical approach helps in understanding risks in the development, implementation, and use of EHR and health information technology (HIT) while accounting for complex interactions of technology within the health care system. Objective: This study addresses two important questions: (1) “which of the common EHR error types are associated with perceived high- and extreme-risk severity ratings among EHR users?”, and (2) “which variables are associated with high- and extreme-risk severity ratings?” Methods: This study was a quantitative, non-experimental, descriptive study of EHR users. We conducted a cross-sectional web-based questionnaire study at the largest hospital district in Finland. The reliability of the summative scales was tested with Cronbach’s alpha. Logistic regression served to assess the association of the independent variables with each of the eight risk factors examined. Results: A total of 2864 eligible respondents provided the final data. Almost half of the respondents reported a high level of risk related to the error type “extended EHR unavailability”. The lowest overall risk level was associated with “selecting incorrectly from a list of items”. In multivariate analyses, profession and clinical unit proved to be the strongest predictors of high perceived risk. Physicians perceived risk levels to be the highest (P<.001 in six of eight error types), while emergency departments, operating rooms, and procedure units were associated with higher perceived risk levels (P<.001 in four of eight error types). Previous participation in eLearning courses on EHR use was associated with lower risk for some of the risk factors. Conclusions: Based on a large number of Finnish EHR users in hospitals, this study indicates that HIT safety hazards should be taken very seriously, particularly in operating rooms, procedure units, emergency departments, and intensive care units/critical care units. Health care organizations should use proactive and systematic assessments of EHR risks before harmful events occur. An EHR training program should be compulsory for all EHR users in order to address EHR safety concerns resulting from the failure to use HIT appropriately. PMID:27154599

  13. Modeling and Control of a Tailsitter with a Ducted Fan

    NASA Astrophysics Data System (ADS)

    Argyle, Matthew Elliott

    There are two traditional aircraft categories: fixed-wing aircraft, which have a long endurance and a high cruise airspeed, and rotorcraft, which can take off and land vertically. The tailsitter is a type of aircraft that has the strengths of both platforms, with no additional mechanical complexity, because it takes off and lands vertically on its tail and can transition the entire aircraft horizontally into high-speed flight. In this dissertation, we develop the entire control system for a tailsitter with a ducted fan. The standard method to compute the quaternion-based attitude error does not generate ideal trajectories for a hovering tailsitter in some situations. In addition, the only approach in the literature to mitigate this breaks down for large attitude errors. We develop an alternative quaternion-based error method which generates better trajectories than the standard approach and can handle large errors. We also derive a hybrid backstepping controller with almost global asymptotic stability based on this error method. Many common altitude and airspeed control schemes for a fixed-wing airplane assume that the altitude and airspeed dynamics are decoupled, which leads to errors. The Total Energy Control System (TECS) is an approach that controls the altitude and airspeed by manipulating the total energy rate and energy distribution rate of the aircraft in a manner that accounts for the dynamic coupling. In this dissertation, a nonlinear controller based on the TECS principles, which can handle inaccurate thrust and drag models, is derived. Simulation results show that the nonlinear controller has better performance than the standard PI TECS control schemes. Most constant-altitude transitions are accomplished by generating an optimal trajectory, and potentially actuator inputs, based on a high-fidelity model of the aircraft. While there are several approaches to mitigate the effects of modeling errors, these do not fully remove the accurate-model requirement. In this dissertation, we develop two different approaches that can achieve near constant-altitude transitions for some types of aircraft. The first method, based on multiple LQR controllers, requires a high-fidelity model of the aircraft. However, the second method, based on the energy along the body axes, requires almost no aerodynamic information.
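
    For context, the standard quaternion attitude error referred to is commonly computed as the product of the conjugated desired attitude with the actual attitude, with the sign-corrected vector part used as the control error. A sketch of that conventional baseline follows (my implementation of the common approach, not the dissertation's alternative method):

        import numpy as np

        def quat_mul(p, q):
            # Hamilton product of quaternions in (w, x, y, z) order
            pw, px, py, pz = p
            qw, qx, qy, qz = q
            return np.array([
                pw*qw - px*qx - py*qy - pz*qz,
                pw*qx + px*qw + py*qz - pz*qy,
                pw*qy - px*qz + py*qw + pz*qx,
                pw*qz + px*qy - py*qx + pz*qw,
            ])

        def attitude_error(q_des, q):
            q_conj = np.array([q_des[0], -q_des[1], -q_des[2], -q_des[3]])
            q_err = quat_mul(q_conj, q)
            if q_err[0] < 0:      # resolve the double cover: take the shorter rotation
                q_err = -q_err
            return q_err[1:]      # vector part drives the attitude controller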

  14. A priori discretization error metrics for distributed hydrologic modeling applications

    NASA Astrophysics Data System (ADS)

    Liu, Hongli; Tolson, Bryan A.; Craig, James R.; Shafii, Mahyar

    2016-12-01

    Watershed spatial discretization is an important step in developing a distributed hydrologic model. A key difficulty in the spatial discretization process is maintaining a balance between the aggregation-induced information loss and the increase in computational burden caused by the inclusion of additional computational units. Objective identification of an appropriate discretization scheme still remains a challenge, in part because of the lack of quantitative measures for assessing discretization quality, particularly prior to simulation. This study proposes a priori discretization error metrics to quantify the information loss of any candidate discretization scheme without having to run and calibrate a hydrologic model. These error metrics are applicable to multi-variable and multi-site discretization evaluation and provide directly interpretable information to the hydrologic modeler about discretization quality. The first metric, a subbasin error metric, quantifies the routing information loss from discretization, and the second, a hydrological response unit (HRU) error metric, improves upon existing a priori metrics by quantifying the information loss due to changes in land cover or soil type property aggregation. The metrics are straightforward to understand and easy to recode. Informed by the error metrics, a two-step discretization decision-making approach is proposed with the advantage of reducing extreme errors and meeting the user-specified discretization error targets. The metrics and decision-making approach are applied to the discretization of the Grand River watershed in Ontario, Canada. Results show that information loss increases as discretization gets coarser. Moreover, results help to explain the modeling difficulties associated with smaller upstream subbasins since the worst discretization errors and highest error variability appear in smaller upstream areas instead of larger downstream drainage areas. Hydrologic modeling experiments under candidate discretization schemes validate the strong correlation between the proposed discretization error metrics and hydrologic simulation responses. Discretization decision-making results show that the common and convenient approach of making uniform discretization decisions across the watershed performs worse than the proposed non-uniform discretization approach in terms of preserving spatial heterogeneity under the same computational cost.

  15. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    ERIC Educational Resources Information Center

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  16. Identifying Novice Student Programming Misconceptions and Errors from Summative Assessments

    ERIC Educational Resources Information Center

    Veerasamy, Ashok Kumar; D'Souza, Daryl; Laakso, Mikko-Jussi

    2016-01-01

    This article presents a study aimed at examining novice student answers in an introductory programming final e-exam to identify misconceptions and types of errors. Our study used the Delphi concept inventory to identify student misconceptions and the skill-, rule-, and knowledge-based errors approach to identify the types of errors made by novices…

  17. Understanding Problem-Solving Errors by Students with Learning Disabilities in Standards-Based and Traditional Curricula

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley

    2016-01-01

    Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…

  18. Writing errors as a result of frontal dysfunction in Japanese patients with amyotrophic lateral sclerosis.

    PubMed

    Tsuji-Akimoto, Sachiko; Hamada, Shinsuke; Yabe, Ichiro; Tamura, Itaru; Otsuki, Mika; Kobashi, Syoji; Sasaki, Hidenao

    2010-12-01

    Loss of communication is a critical problem for advanced amyotrophic lateral sclerosis (ALS) patients. This loss of communication is mainly caused by severe dysarthria and disability of the dominant hand. However, reports show that about 50% of ALS patients have mild cognitive dysfunction, and there are a considerable number of case reports on Japanese ALS patients with agraphia. To clarify writing disabilities in non-demented ALS patients, 18 non-demented ALS patients and 16 controls without neurological disorders were examined for frontal cognitive function and writing ability. To assess writing errors statistically, we scored composition ability with an original writing error index (WEI). The ALS and control groups did not differ significantly with regard to age, years of education, or general cognitive level. Two patients could not write a letter because of disability of the dominant hand. The WEI and results of picture arrangement tests indicated significant impairment in the ALS patients. Auditory comprehension (Western Aphasia Battery; WAB IIC) and kanji dictation also showed mild impairment. Patients' writing errors consisted of both syntactic and letter-writing mistakes. Omission, substitution, displacement, and inappropriate placement of the phonic marks of kana were observed; these features have often been reported in Japanese patients with agraphia resulting from a frontal lobe lesion. The most frequent type of error was omission of kana; the next most common was a missing subject. Writing errors might be a specific deficit for some non-demented ALS patients.

  19. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

    When testing the equality of means from two different populations, a t-test or large-sample normal test tends to be performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and for procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials where the use of a placebo is unethical.
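
    The core issue can be reproduced with a few lines of simulation. In the sketch below (the adaptive rule and all constants are my own illustrative choices, not the paper's settings), the second sample's size depends on the first sample's observed effect, and a naive pooled z-test at nominal alpha = 0.05 is applied; the realized type I error rate under the null can then drift from the nominal level.

        import numpy as np

        rng = np.random.default_rng(1)
        alpha, n1, reps = 0.05, 50, 20000
        rejections = 0
        for _ in range(reps):
            x1, y1 = rng.normal(0, 1, n1), rng.normal(0, 1, n1)
            # adaptive rule: a promising first stage triggers a small second stage
            n2 = 25 if abs(x1.mean() - y1.mean()) > 0.2 else 100
            x = np.concatenate([x1, rng.normal(0, 1, n2)])
            y = np.concatenate([y1, rng.normal(0, 1, n2)])
            se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
            rejections += abs((x.mean() - y.mean()) / se) > 1.96
        print(f"realized type I error = {rejections / reps:.4f} (nominal {alpha})")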

  20. ROSE::FTTransform - A Source-to-Source Translation Framework for Exascale Fault-Tolerance Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lidman, J; Quinlan, D; Liao, C

    2012-03-26

    Exascale computing systems will require sufficient resilience to tolerate numerous types of hardware faults while still assuring correct program execution. Such extreme-scale machines are expected to be dominated by processors driven at lower voltages (near the minimum 0.5 volts for current transistors). At these voltage levels, the rate of transient errors increases dramatically due to the sensitivity to transient and geographically localized voltage drops on parts of the processor chip. To achieve power efficiency, these processors are likely to be streamlined and minimal, and thus they cannot be expected to handle transient errors entirely in hardware. Here we present an open, compiler-based framework to automate the armoring of High Performance Computing (HPC) software to protect it from these types of transient processor errors. We develop an open infrastructure to support research work in this area, and we define tools that, in the future, may provide more complete automated and/or semi-automated solutions to support software resiliency on future exascale architectures. Results demonstrate that our approach is feasible, pragmatic in how it can be separated from the software development process, and reasonably efficient (0% to 30% overhead for the Jacobi iteration on common hardware; and 20%, 40%, 26%, and 2% overhead for a randomly selected subset of benchmarks from the Livermore Loops [1]).
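
    The simplest instance of the armoring such a framework automates is redundant execution with majority voting. ROSE::FTTransform performs this as a source-to-source transform on C/C++ code; the toy sketch below is mine and only conveys the idea, not the tool's actual output.

        def vote(a, b, c):
            # a majority vote masks a single corrupted result
            return a if a in (b, c) else b

        def armored(f, *args):
            # execute three times and vote (triple modular redundancy in time)
            return vote(f(*args), f(*args), f(*args))

        print(armored(lambda x: x * x, 7))  # 49, even if one evaluation were corrupted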

  1. Teaching Common Errors in Applying a Procedure.

    ERIC Educational Resources Information Center

    Marcone, Stephen; Reigeluth, Charles M.

    1988-01-01

    Discusses study that investigated whether or not the teaching of matched examples and nonexamples in the form of common errors could improve student performance in undergraduate music theory courses. Highlights include hypotheses tested, pretests and posttests, and suggestions for further research with different age groups. (19 references)…

  2. Error reporting from the da Vinci surgical system in robotic surgery: A Canadian multispecialty experience at a single academic centre

    PubMed Central

    Rajih, Emad; Tholomier, Côme; Cormier, Beatrice; Samouëlian, Vanessa; Warkus, Thomas; Liberman, Moishe; Widmer, Hugues; Lattouf, Jean-Baptiste; Alenizi, Abdullah M.; Meskawi, Malek; Valdivieso, Roger; Hueber, Pierre-Alain; Karakewicz, Pierre I.; El-Hakim, Assaad; Zorn, Kevin C.

    2017-01-01

    Introduction: The goal of the study is to evaluate and report on the third-generation da Vinci surgical (Si) system malfunctions. Methods: A total of 1228 robotic surgeries were performed between January 2012 and December 2015 at our academic centre. All cases were performed by using a single, dual console, four-arm, da Vinci Si robot system. The three specialties included urology, gynecology, and thoracic surgery. Studied outcomes included the robotic surgical error types, immediate consequences, and operative side effects. Error rate trend with time was also examined. Results: Overall robotic malfunctions were documented on the da Vinci Si systems event log in 4.97% (61/1228) of the cases. The most common error was related to pressure sensors in the robotic arms indicating out of limit output. This recoverable fault was noted in 2.04% (25/1228) of cases. Other errors included unrecoverable electronic communication-related errors in 1.06% (13/1228) of cases, failed encoder errors in 0.57% (7/1228), illuminator-related errors in 0.33% (4/1228), faulty switches in 0.24% (3/1228), battery-related failures in 0.24% (3/1228), and software/hardware errors in 0.08% (1/1228) of cases. Surgical delay was reported in only one patient. No conversion to either open or laparoscopic surgery occurred secondary to robotic malfunctions. In 2015, the incidence of robotic error rose to 1.71% (21/1228) from 0.81% (10/1228) in 2014. Conclusions: Robotic malfunction is not infrequent in the current era of robotic surgery in various surgical subspecialties, but it is rarely consequential. Its infrequent occurrence does not seem to affect patient safety or surgical outcomes. PMID:28503234

  3. Error reporting from the da Vinci surgical system in robotic surgery: A Canadian multispecialty experience at a single academic centre.

    PubMed

    Rajih, Emad; Tholomier, Côme; Cormier, Beatrice; Samouëlian, Vanessa; Warkus, Thomas; Liberman, Moishe; Widmer, Hugues; Lattouf, Jean-Baptiste; Alenizi, Abdullah M; Meskawi, Malek; Valdivieso, Roger; Hueber, Pierre-Alain; Karakewicz, Pierre I; El-Hakim, Assaad; Zorn, Kevin C

    2017-05-01

    The goal of the study is to evaluate and report on the third-generation da Vinci surgical (Si) system malfunctions. A total of 1228 robotic surgeries were performed between January 2012 and December 2015 at our academic centre. All cases were performed by using a single, dual console, four-arm, da Vinci Si robot system. The three specialties included urology, gynecology, and thoracic surgery. Studied outcomes included the robotic surgical error types, immediate consequences, and operative side effects. Error rate trend with time was also examined. Overall robotic malfunctions were documented on the da Vinci Si systems event log in 4.97% (61/1228) of the cases. The most common error was related to pressure sensors in the robotic arms indicating out of limit output. This recoverable fault was noted in 2.04% (25/1228) of cases. Other errors included unrecoverable electronic communication-related errors in 1.06% (13/1228) of cases, failed encoder errors in 0.57% (7/1228), illuminator-related errors in 0.33% (4/1228), faulty switches in 0.24% (3/1228), battery-related failures in 0.24% (3/1228), and software/hardware errors in 0.08% (1/1228) of cases. Surgical delay was reported in only one patient. No conversion to either open or laparoscopic surgery occurred secondary to robotic malfunctions. In 2015, the incidence of robotic error rose to 1.71% (21/1228) from 0.81% (10/1228) in 2014. Robotic malfunction is not infrequent in the current era of robotic surgery in various surgical subspecialties, but it is rarely consequential. Its infrequent occurrence does not seem to affect patient safety or surgical outcomes.

  4. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-09

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings. On the other hand, inpatient errors are more severe than outpatient errors.

  5. Critical Thinking in Critical Care: Five Strategies to Improve Teaching and Learning in the Intensive Care Unit.

    PubMed

    Hayes, Margaret M; Chatterjee, Souvik; Schwartzstein, Richard M

    2017-04-01

    Critical thinking, the capacity to be deliberate about thinking, is increasingly the focus of undergraduate medical education, but is not commonly addressed in graduate medical education. Without critical thinking, physicians, and particularly residents, are prone to cognitive errors, which can lead to diagnostic errors, especially in a high-stakes environment such as the intensive care unit. Although challenging, critical thinking skills can be taught. At this time, there is a paucity of data to support an educational gold standard for teaching critical thinking, but we believe that five strategies, rooted in cognitive theory and our personal teaching experiences, provide an effective framework to teach critical thinking in the intensive care unit. The five strategies are: make the thinking process explicit by helping learners understand that the brain uses two cognitive processes: type 1, an intuitive pattern-recognizing process, and type 2, an analytic process; discuss cognitive biases, such as premature closure, and teach residents to minimize biases by expressing uncertainty and keeping differentials broad; model and teach inductive reasoning by utilizing concept and mechanism maps and explicitly teach how this reasoning differs from the more commonly used hypothetico-deductive reasoning; use questions to stimulate critical thinking: "how" or "why" questions can be used to coach trainees and to uncover their thought processes; and assess and provide feedback on learners' critical thinking. We believe these five strategies provide practical approaches for teaching critical thinking in the intensive care unit.

  6. Critical Thinking in Critical Care: Five Strategies to Improve Teaching and Learning in the Intensive Care Unit

    PubMed Central

    Chatterjee, Souvik; Schwartzstein, Richard M.

    2017-01-01

    Critical thinking, the capacity to be deliberate about thinking, is increasingly the focus of undergraduate medical education, but is not commonly addressed in graduate medical education. Without critical thinking, physicians, and particularly residents, are prone to cognitive errors, which can lead to diagnostic errors, especially in a high-stakes environment such as the intensive care unit. Although challenging, critical thinking skills can be taught. At this time, there is a paucity of data to support an educational gold standard for teaching critical thinking, but we believe that five strategies, rooted in cognitive theory and our personal teaching experiences, provide an effective framework to teach critical thinking in the intensive care unit. The five strategies are: make the thinking process explicit by helping learners understand that the brain uses two cognitive processes: type 1, an intuitive pattern-recognizing process, and type 2, an analytic process; discuss cognitive biases, such as premature closure, and teach residents to minimize biases by expressing uncertainty and keeping differentials broad; model and teach inductive reasoning by utilizing concept and mechanism maps and explicitly teach how this reasoning differs from the more commonly used hypothetico-deductive reasoning; use questions to stimulate critical thinking: “how” or “why” questions can be used to coach trainees and to uncover their thought processes; and assess and provide feedback on learners’ critical thinking. We believe these five strategies provide practical approaches for teaching critical thinking in the intensive care unit. PMID:28157389

  7. Assessment and Calibration of Ultrasonic Measurement Errors in Estimating Weathering Index of Stone Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Keehm, Y.

    2011-12-01

    Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important to plan conservation and restoration. Ultrasonic measurement is one of the commonly used techniques to evaluate the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically, we use a portable ultrasonic device (PUNDIT) with exponential sensors. However, there are many factors that cause errors in measurements, such as operators, sensor layouts, or measurement directions. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and sensor directions (anisotropy). For operator bias, we found no significant differences by operator sex, while the pressure an operator exerts can create larger errors in measurements. Calibrating with a standard sample for each operator is essential in this case. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since the direct measurement is difficult in most cases) gives a lower velocity than the real one. We found that the correction coefficient is slightly different for different types of rocks: 1.50 for granite and sandstone and 1.46 for marble. For the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity, though they are considered isotropic on a macroscopic scale. Thus, averaging four directional measurements (0°, 45°, 90°, 135°) gives much smaller measurement errors (the variance is 2-3 times smaller). In conclusion, we quantitatively reported the errors from various sources in ultrasonic measurement of stone cultural properties and suggested correction amounts and procedures to calibrate the measurements. Acknowledgement: This study, which forms a part of the project, has been achieved with the support of the national R&D project hosted by the National Research Institute of Cultural Heritage, Cultural Heritage Administration (No. NRICH-1107-B01F).

  8. Item Discrimination and Type I Error in the Detection of Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Yanju; Brooks, Gordon P.; Johanson, George A.

    2012-01-01

    In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel-Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the…

  9. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross-validation in order to estimate the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
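
    The two-stage design (LASSO selection, then a classifier, evaluated by fivefold cross-validation) maps directly onto a standard pipeline. The sketch below is a generic reconstruction with synthetic data sized like the study sample; the paper's variables and tuning are not reproduced, and an L1-penalized logistic model stands in for the LASSO selection step.

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectFromModel
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # placeholder data shaped like the sample (48 GCD + 124 NGCD firms)
        X, y = make_classification(n_samples=172, n_features=30,
                                   weights=[0.72], random_state=0)

        pipe = make_pipeline(
            StandardScaler(),
            # L1 penalty zeroes out weak predictors (the LASSO selection step)
            SelectFromModel(LogisticRegression(penalty="l1",
                                               solver="liblinear", C=0.5)),
            SVC(kernel="rbf"),                     # the LASSO-SVM variant
        )
        acc = cross_val_score(pipe, X, y, cv=5)    # fivefold cross-validation
        print(f"CV accuracy: {acc.mean():.3f}")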

  10. The ADRA2B gene in the production of false memories for affective information in healthy female volunteers.

    PubMed

    Fairfield, Beth; Mammarella, Nicola; Di Domenico, Alberto; D'Aurora, Marco; Stuppia, Liborio; Gatta, Valentina

    2017-08-30

    False memories are common memory distortions in everyday life and seem to increase with affectively connoted complex information. In line with recent studies showing a significant interaction between the noradrenergic system and emotional memory, we investigated whether healthy volunteer carriers of the deletion variant of the ADRA2B gene, which codes for the α2b-adrenergic receptor, are more prone to false memories than non-carriers. In this study, we collected genotype data from 212 healthy female volunteers: 91 ADRA2B carriers and 121 non-carriers. To assess gene effects on false memories for affective information, factorial mixed-model analyses of variance (ANOVAs) were conducted with genotype as the between-subjects factor and type of memory error as the within-subjects factor. We found that although carriers and non-carriers made comparable numbers of false memory errors, they showed differences in the direction of valence biases, especially for inferential causal errors. Specifically, carriers produced fewer causal false memory errors for scripts with a negative outcome, whereas non-carriers showed a more general emotional effect and made fewer causal errors with both positive and negative outcomes. These findings suggest that putatively higher levels of noradrenaline in deletion carriers may enhance short-term consolidation of negative information and lead to fewer memory distortions when facing negative events. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data

    PubMed Central

    Nevo, Daniel; Zucker, David M.; Tamimi, Rulla M.; Wang, Molin

    2017-01-01

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses’ Health Study to demonstrate the utility of our method. PMID:27558651

  12. Prevalence of teen driver errors leading to serious motor vehicle crashes.

    PubMed

    Curry, Allison E; Hafetz, Jessica; Kallan, Michael J; Winston, Flaura K; Durbin, Dennis R

    2011-07-01

    Motor vehicle crashes are the leading cause of adolescent deaths. Programs and policies should target the most common and modifiable reasons for crashes. We estimated the frequency of critical reasons for crashes involving teen drivers, and examined in more depth specific teen driver errors. The National Highway Traffic Safety Administration's (NHTSA) National Motor Vehicle Crash Causation Survey collected data at the scene of a nationally representative sample of 5470 serious crashes between 7/05 and 12/07. NHTSA researchers assigned a single driver, vehicle, or environmental factor as the critical reason for the event immediately leading to each crash. We analyzed crashes involving 15-18 year old drivers. 822 teen drivers were involved in 795 serious crashes, representing 335,667 teens in 325,291 crashes. Driver error was by far the most common reason for crashes (95.6%), as opposed to vehicle or environmental factors. Among crashes with a driver error, a teen made the error 79.3% of the time (75.8% of all teen-involved crashes). Recognition errors (e.g., inadequate surveillance, distraction) accounted for 46.3% of all teen errors, followed by decision errors (e.g., following too closely, too fast for conditions) (40.1%) and performance errors (e.g., loss of control) (8.0%). Inadequate surveillance, driving too fast for conditions, and distracted driving together accounted for almost half of all crashes. Aggressive driving behavior, drowsy driving, and physical impairments were less commonly cited as critical reasons. Males and females had similar proportions of broadly classified errors, although females were specifically more likely to make inadequate surveillance errors. Our findings support prioritization of interventions targeting driver distraction and surveillance and hazard awareness training. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Out-of-This-World Calculations

    ERIC Educational Resources Information Center

    Kalb, Kristina S.; Gravett, Julie M.

    2012-01-01

    By following learned rules rather than reasoning, students often fall into common error patterns, something every experienced teacher has observed in the classroom. To keep these common error patterns from taking hold among their students, the authors decided to supplement their math text with two weeklong investigations. The first was…

  14. Ten common errors beginning substance abuse workers make in group treatment.

    PubMed

    Greif, G L

    1996-01-01

    Beginning therapists sometimes make mistakes when working with substance abusers in groups. This article discusses ten common errors that the author has observed. Five center on the therapist's approach and five center on the nuts and bolts of group leadership. Suggestions are offered for how to avoid them.

  15. A theory of cerebellar cortex and adaptive motor control based on two types of universal function approximation capability.

    PubMed

    Fujita, Masahiko

    2016-03-01

    Lesions of the cerebellum result in large errors in movements. The cerebellum adaptively controls the strength and timing of motor command signals depending on the internal and external environments of movements. The present theory describes how the cerebellar cortex can control signals for accurate and timed movements. A model network of the cerebellar Golgi and granule cells is shown to be equivalent to a multiple-input (from mossy fibers) hierarchical neural network with a single hidden layer of threshold units (granule cells) that receive a common recurrent inhibition (from a Golgi cell). The weighted sum of the hidden unit signals (Purkinje cell output) is theoretically analyzed regarding the capability of the network to perform two types of universal function approximation. The hidden units begin firing as the excitatory inputs exceed the recurrent inhibition. This simple threshold feature leads to the first approximation theory, and the network final output can be any continuous function of the multiple inputs. When the input is constant, this output becomes stationary. However, when the recurrent unit activity is triggered to decrease or the recurrent inhibition is triggered to increase through a certain mechanism (metabotropic modulation or extrasynaptic spillover), the network can generate any continuous signals for a prolonged period of change in the activity of recurrent signals, as the second approximation theory shows. By incorporating the cerebellar capability of two such types of approximations into a motor system, in which learning proceeds through repeated movement trials with accompanying corrections, accurate and timed responses for reaching the target can be adaptively acquired. Simple models of motor control can solve the motor error vs. sensory error problem, as well as the structural aspects of the credit (or error) assignment problem. Two physiological experiments are proposed for examining the delay and trace conditioning of eyelid responses, as well as saccade adaptation, to investigate this novel idea of cerebellar processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
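
    The first approximation property is easy to demonstrate with a toy version of the circuit. In the sketch below (my construction: random weights and per-unit thresholds loosely stand in for the mossy fiber, granule cell, and Golgi cell elements), a single layer of threshold units with trained output weights fits a continuous function of the input.

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 200)[:, None]           # mossy-fiber input
        target = np.sin(2 * np.pi * x).ravel()        # continuous function to approximate

        W = rng.normal(0, 8, (1, 100))                # input-to-granule weights
        threshold = rng.uniform(-8, 8, 100)           # effective inhibition per unit
        H = (x @ W - threshold > 0).astype(float)     # granule cells fire above threshold
        w_out, *_ = np.linalg.lstsq(H, target, rcond=None)  # Purkinje output weights

        print(f"max approximation error: {np.max(np.abs(H @ w_out - target)):.3f}")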

  16. Group sequential designs for stepped-wedge cluster randomised trials

    PubMed Central

    Grayling, Michael J; Wason, James MS; Mander, Adrian P

    2017-01-01

    Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial’s type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial. PMID:28653550
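
    For readers unfamiliar with the error-spending approach used here, the idea is to pre-specify a function giving the cumulative type I error that may be spent by each information fraction, and to back the stage-wise boundaries out of it. A toy two-look example follows (my own generic construction, not the authors' stepped-wedge methodology):

        import numpy as np
        from scipy.optimize import brentq
        from scipy.stats import multivariate_normal, norm

        alpha = 0.05
        t = np.array([0.5, 1.0])                  # information fractions at the two looks
        # O'Brien-Fleming-type spending: f(t) = 2 - 2*Phi(z_{alpha/2} / sqrt(t))
        spend = 2 - 2 * norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t))
        c1 = norm.ppf(1 - spend[0])               # stage-1 efficacy boundary
        rho = np.sqrt(t[0] / t[1])                # correlation of the stage statistics
        joint = multivariate_normal(cov=[[1, rho], [rho, 1]])

        def excess(c2):
            # cumulative rejection probability under H0, minus alpha
            p_late_reject = norm.cdf(c1) - joint.cdf([c1, c2])  # P(Z1 < c1, Z2 >= c2)
            return spend[0] + p_late_reject - alpha

        c2 = brentq(excess, 1.0, 4.0)
        print(f"reject if z1 >= {c1:.3f} at look 1, else if z2 >= {c2:.3f} at look 2")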

  17. Development of Refractive Errors-What Can We Learn From Inherited Retinal Dystrophies?

    PubMed

    Hendriks, Michelle; Verhoeven, Virginie J M; Buitendijk, Gabriëlle H S; Polling, Jan Roelof; Meester-Smoor, Magda A; Hofman, Albert; Kamermans, Maarten; Ingeborgh van den Born, L; Klaver, Caroline C W

    2017-10-01

    It is unknown which retinal cells are involved in the retina-to-sclera signaling cascade causing myopia. As inherited retinal dystrophies (IRD) are characterized by dysfunction of a single retinal cell type and have a high risk of refractive errors, a study investigating the affected cell type, causal gene, and refractive error in IRDs may provide insight herein. Case-control study. Study Population: Total of 302 patients with IRD from 2 ophthalmogenetic centers in the Netherlands. Reference Population: Population-based Rotterdam Study-III and Erasmus Rucphen Family Study (N = 5550). Distributions and mean spherical equivalent (SE) were calculated for main affected cell type and causal gene, and risks of myopia and hyperopia were evaluated using logistic regression. Bipolar cell-related dystrophies were associated with the highest risks (odds ratio [OR] high myopia 239.7; OR mild hyperopia 263.2, both P < .0001; SE -6.86 diopters [D], standard deviation [SD] 6.38), followed by cone-dominated dystrophies (OR high myopia 19.5, P < .0001; OR high hyperopia 10.7, P = .033; SE -3.10 D [SD 4.49]); rod-dominated dystrophies (OR high myopia 10.1, P < .0001; OR high hyperopia 9.7, P = .001; SE -2.27 D [SD 4.65]); and retinal pigment epithelium (RPE)-related dystrophies (OR low myopia 2.7; P = .001; OR high hyperopia 5.8; P = .025; SE -0.10 D [SD 3.09]). Mutations in RPGR (SE -7.63 D [SD 3.31]) and CACNA1F (SE -5.33 D [SD 3.10]) coincided with the highest degree of myopia and in CABP4 (SE 4.81 D [SD 0.35]) with the highest degree of hyperopia. Refractive errors, in particular myopia, are common in IRD. The bipolar synapse and the inner and outer segments of the photoreceptor may serve as critical sites for myopia development. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Group sequential designs for stepped-wedge cluster randomised trials.

    PubMed

    Grayling, Michael J; Wason, James Ms; Mander, Adrian P

    2017-10-01

    The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial.

  19. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  20. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  1. Small perturbations in a finger-tapping task reveal inherent nonlinearities of the underlying error correction mechanism.

    PubMed

    Bavassi, M Luz; Tagliazucchi, Enzo; Laje, Rodrigo

    2013-02-01

    Time processing in the few hundred milliseconds range is involved in the human skill of sensorimotor synchronization, like playing music in an ensemble or finger tapping to an external beat. In finger tapping, a mechanistic explanation in biologically plausible terms of how the brain achieves synchronization is still missing despite considerable research. In this work we show that nonlinear effects are important for the recovery of synchronization following a perturbation (a step change in stimulus period), even for perturbation magnitudes smaller than 10% of the period, which is well below the amount of perturbation needed to evoke other nonlinear effects like saturation. We build a nonlinear mathematical model for the error correction mechanism and test its predictions, and further propose a framework that allows us to unify the description of the three common types of perturbations. While previous authors have used two different model mechanisms for fitting different perturbation types, or have fitted different parameter value sets for different perturbation magnitudes, we propose the first unified description of the behavior following all perturbation types and magnitudes as the dynamical response of a compound model with fixed terms and a single set of parameter values. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.

  3. Error identification, disclosure, and reporting: practice patterns of three emergency medicine provider types.

    PubMed

    Hobgood, Cherri; Xie, Jipan; Weiner, Bryan; Hooker, James

    2004-02-01

    To gather preliminary data on how the three major types of emergency medicine (EM) providers, physicians, nurses (RNs), and out-of-hospital personnel (EMTs), differ in error identification, disclosure, and reporting. A convenience sample of emergency department (ED) providers completed a brief survey designed to evaluate error frequency, disclosure, and reporting practices as well as error-based discussion and educational activities. One hundred sixteen subjects participated: 41 EMTs (35%), 33 RNs (28%), and 42 physicians (36%). Forty-five percent of EMTs, 56% of RNs, and 21% of physicians identified no clinical errors during the preceding year. When errors were identified, physicians learned of them via dialogue with RNs (58%), patients (13%), pharmacy (35%), and attending physicians (35%). For known errors, all providers were equally unlikely to inform the team caring for the patient. Disclosure to patients was limited and varied by provider type (19% EMTs, 23% RNs, and 74% physicians). Disclosure education was rare, with

  4. An assessment of the Jenkinson and Collison synoptic classification to a continental mid-latitude location

    NASA Astrophysics Data System (ADS)

    Spellman, Greg

    2017-05-01

    A weather-type catalogue based on the Jenkinson and Collison method was developed for an area in south-west Russia for the period 1961-2010. Gridded sea level pressure data was obtained from the National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis. The resulting catalogue was analysed for frequency of individual types and groups of weather types to characterise long-term atmospheric circulation in this region. Overall, the most frequent type is anticyclonic (A) (23.3 %) followed by cyclonic (C) (11.9 %); however, there are some key seasonal patterns with westerly circulation being significantly more common in winter than summer. The utility of this synoptic classification is evaluated by modelling daily rainfall amounts. A low level of error is found using a simple model based on the prevailing weather type. Finally, characteristics of the circulation classification are compared to those for the original JC British Isles catalogue and a much more equal distribution of flow types is seen in the former classification.

  5. Coal gasification system with a modulated on/off control system

    DOEpatents

    Fasching, George E.

    1984-01-01

    A modulated control system is provided for improving regulation of the bed level in a fixed-bed coal gasifier into which coal is fed from a rotary coal feeder. A nuclear bed level gauge using a cobalt source and an ion chamber detector is used to detect the coal bed level in the gasifier. The detector signal is compared to a bed level set point signal in a primary controller which operates in proportional/integral modes to produce an error signal. The error signal is modulated by the injection of a triangular wave signal of a frequency of about 0.0004 Hz and an amplitude of about 80% of the primary deadband. The modulated error signal is fed to a triple-deadband secondary controller which jogs the coal feeder speed up or down by on/off control of a feeder speed change driver such that the gasifier bed level is driven toward the set point while preventing excessive cycling (oscillation) common in on/off mode automatic controllers of this type. Regulation of the bed level is achieved without excessive feeder speed control jogging.
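
    A simplified sketch of the modulation idea in this abstract (the 0.0004 Hz frequency and the 80%-of-deadband amplitude are taken from the abstract; the single symmetric deadband below is a simplification of the patent's triple-deadband secondary controller):

    ```python
    # Add a slow triangular dither to the PI error signal so that a deadband
    # on/off controller jogs the feeder speed without sustained oscillation.
    def triangular_wave(t_s, freq_hz=0.0004, amplitude=1.0):
        phase = (t_s * freq_hz) % 1.0
        return amplitude * (4.0 * abs(phase - 0.5) - 1.0)   # ranges over [-1, 1]

    def jog_command(error, t_s, deadband=1.0):
        modulated = error + triangular_wave(t_s, amplitude=0.8 * deadband)
        if modulated > deadband:
            return "jog feeder speed up"
        if modulated < -deadband:
            return "jog feeder speed down"
        return "hold"
    ```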

  6. Analysis of Student Errors on Division of Fractions

    NASA Astrophysics Data System (ADS)

    Maelasari, E.; Jupri, A.

    2017-02-01

    This study aims to describe the types of student errors that typically occur when completing division operations on fractions, and to describe the causes of students' mistakes. This research used a descriptive qualitative method and involved 22 fifth-grade students at one particular elementary school in Kuningan, Indonesia. The results showed that students' erroneous answers arose because students applied the same procedures to both multiplication and division operations, because converting mixed fractions to common fractions confused them, and because they were careless in calculation. From students' written work on the fraction problems, we found that the learning method used influenced student responses, and some responses were beyond the researchers' predictions. We conclude that the teaching method is not the only important thing to prepare; the teacher should also prepare predictions of students' answers to the problems that will be posed during the learning process. This could help teachers reflect, improve, and achieve the expected learning goals.

  7. Identifying and attributing common data quality problems: temperature and precipitation observations in Bolivia and Peru

    NASA Astrophysics Data System (ADS)

    Hunziker, Stefan; Gubler, Stefanie; Calle, Juan; Moreno, Isabel; Andrade, Marcos; Velarde, Fernando; Ticona, Laura; Carrasco, Gualberto; Castellón, Yaruska; Oria Rojas, Clara; Brönnimann, Stefan; Croci-Maspoli, Mischa; Konzelmann, Thomas; Rohrer, Mario

    2016-04-01

    Assessing climatological trends and extreme events requires high-quality data. However, for many regions of the world, observational data of the desired quality is not available. In order to eliminate errors in the data, quality control (QC) should be applied before data analysis. If the data still contain undetected errors and quality problems after QC, the consequence may be misleading and erroneous results. A region which is seriously affected by observational data quality problems is the Central Andes. At the same time, climatological information on ongoing climate change and climate risks is of utmost importance in this area due to its vulnerability to meteorological extreme events and climatic changes. Besides data quality issues, the lack of metadata and the low station network density complicate quality control and assessment, and hence, appropriate application of the data. Errors and data problems may occur at any point of the data generation chain, e.g. due to unsuitable station configuration or siting, poor station maintenance, erroneous instrument reading, or inaccurate data digitalization and post-processing. Different measurement conditions in the predominantly conventional station networks in Bolivia and Peru, compared to the mostly automated networks in Europe or North America, may cause different types of errors. Hence, applying QC methods designed for state-of-the-art networks to Bolivian and Peruvian climate observations may not be suitable or sufficient. A comprehensive set of Bolivian and Peruvian maximum and minimum temperature and precipitation in-situ measurements was analyzed to detect and describe common data quality problems. Furthermore, station visits and reviews of the original documents were conducted. Some of the errors could be attributed to a specific source. Such information is of great importance for data users, since it allows them to decide for which applications the data can still be used. In ideal cases, it may even allow the error to be corrected. Strategies on how to deal with data from the Central Andes are suggested; however, the approach may be applicable to networks from other countries where conditions of climate observations are comparable.

  8. [Epidemiologic study of refractive errors in schoolchildren in socioeconomically deprived regions in Tunisia].

    PubMed

    Ayed, T; Sokkah, M; Charfi, O; El Matri, L

    2002-09-01

    This study's purpose was to estimate the prevalence of common refractive errors in schoolchildren in low socioeconomic regions in Tunisia and to assess their effect on school performance. This was a cross-sectional study conducted from November 1999 to January 2000 within the context of health care screening campaigns carried out by volunteer ophthalmologists and opticians in low-end socioeconomic regions in Tunisia. The concerned population was schoolchildren living in the cities of Tunis and Tabarka (North), Kerkena (Center), and Tozeur (South). We examined a total of 708 children with a mean age of 11.9 ± 3.21 years (from 6 to 20 years) and a sex ratio of 0.84. A cycloplegic refraction examination was performed on all the children. Statistical analyses with the chi-squared test and the Fisher exact test allowed us to calculate the prevalence of refractive errors overall and separately, as well as the distribution according to age, sex, and region. We also searched for a possible relation between refractive errors and academic failure. Among the 708 children, 57.2% [CI(95)=53.4-60] had refractive errors, of which 31.6% [CI(95)=28.2-35.2] were hyperopic, whereas 9.1% [CI(95)=7.1-11.5] were myopic. Astigmatism was found in 16.4% [CI(95)=13.7-19.3]. The prevalence of myopia was significantly higher after the age of 14 and increased significantly with age (P=0.0003). The prevalence of hyperopia was significantly higher between the ages of 8 and 11 (P=0.0004). Hyperopic astigmatism was significantly more frequent between 6 and 9 years of age (P=0.001). There was no significant difference regarding sex. However, the distribution of the refractive errors by region showed a significantly higher prevalence of myopia in Tunis, Kerkena, and Tozeur. This difference disappeared with increasing age. The study of the effect of these refractive errors on school performance of these children from poor areas showed a significant association between all types of refractive errors and academic failure, with an odds ratio of 2.13 for all types of refractive errors, 2.69 for hyperopia, 2.87 for myopia, and 2.73 for astigmatism. This study showed the prevalence of refractive errors in a poor population of schoolchildren and emphasized the importance of such examinations. The ability of a child to participate in the educational experience is at least partially dependent on good vision.

  9. 13.1 micrometers hard X-ray focusing by a new type monocapillary X-ray optic designed for common laboratory X-ray source

    NASA Astrophysics Data System (ADS)

    Sun, Xuepeng; zhang, Xiaoyun; Zhu, Yu; Wang, Yabing; Shang, Hongzhong; Zhang, Fengshou; Liu, Zhiguo; Sun, Tianxi

    2018-04-01

    A new type of monocapillary X-ray optic, called 'two bounces monocapillary X-ray optics' (TBMXO), is proposed for generating a small focal spot with high power-density gain for micro X-ray analysis, using a common laboratory X-ray source. The TBMXO consists of two parts: an ellipsoidal part and a tapered part. Before experimental testing, the TBMXO was simulated by the ray-tracing method in MATLAB. The simulated results predicted that the proposed TBMXO would produce a smaller focal spot with higher power-density gain than the ellipsoidal monocapillary X-ray optic (EMXO). In the experiment, the TBMXO performance was tested with both an optical device and a Cu-target X-ray tube with a focal spot of 100 μm. The results indicated that the TBMXO had a slope error of 57.6 μrad and produced a 13.1 μm focal spot with a power-density gain of 1360.

  10. Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution

    NASA Astrophysics Data System (ADS)

    Samohyl, Robert Wayne

    2017-10-01

    This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States Standards ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, by suggesting the use of the hypergeometric distribution to calculate the parameters of sampling plans avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce such as public health and political polling. With the new procedures, both sample size and sample error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot quality percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk, and likewise, the same question arises with consumer risk which is necessarily associated with type II error. The resolution of these questions is new to the literature. The article presents R code throughout.
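
    The paper's own R code is not reproduced here, but the central computation is easy to sketch (Python; the plan values N, n, c are hypothetical): the exact hypergeometric acceptance probability of a single-sampling plan versus its binomial approximation.

    ```python
    from scipy.stats import binom, hypergeom

    N, n, c = 500, 50, 2          # lot size, sample size, acceptance number
    for p in (0.01, 0.05, 0.10):  # lot fraction defective
        D = round(N * p)          # number of defectives in the lot
        exact = hypergeom.cdf(c, N, D, n)  # P(accept), sampling without replacement
        approx = binom.cdf(c, n, p)        # binomial approximation
        print(f"p = {p:.2f}: hypergeometric = {exact:.4f}, binomial = {approx:.4f}")
    ```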

  11. Uncertainty and inference in the world of paleoecological data

    NASA Astrophysics Data System (ADS)

    McLachlan, J. S.; Dawson, A.; Dietze, M.; Finley, M.; Hooten, M.; Itter, M.; Jackson, S. T.; Marlon, J. R.; Raiho, A.; Tipton, J.; Williams, J.

    2017-12-01

    Proxy data in paleoecology and paleoclimatology share a common set of biases and uncertainties: spatiotemporal error associated with the taphonomic processes of deposition, preservation, and dating; calibration error between proxy data and the ecosystem states of interest; and error in the interpolation of calibrated estimates across space and time. Researchers often account for this daunting suite of challenges by applying qualitative expert judgment: inferring the past states of ecosystems and assessing the level of uncertainty in those states subjectively. The effectiveness of this approach can be seen by the extent to which future observations confirm previous assertions. Hierarchical Bayesian (HB) statistical approaches offer an alternative way of accounting for multiple uncertainties in paleo data. HB estimates of ecosystem state formally account for each of the common uncertainties listed above. HB approaches can readily incorporate additional data, and data of different types, into estimates of ecosystem state. And HB estimates of ecosystem state, with associated uncertainty, can be used to constrain forecasts of ecosystem dynamics based on mechanistic ecosystem models using data assimilation. Decisions about how to structure an HB model are also subjective, which creates a parallel framework for deciding how to interpret data from the deep past. Our group, the Paleoecological Observatory Network (PalEON), has applied hierarchical Bayesian statistics to formally account for uncertainties in proxy-based estimates of past climate, fire, primary productivity, biomass, and vegetation composition. Our estimates often reveal new patterns of past ecosystem change, which is an unambiguously good thing, but we also often estimate a level of uncertainty that is uncomfortably high for many researchers. High levels of uncertainty are due to several features of the HB approach: spatiotemporal smoothing, the formal aggregation of multiple types of uncertainty, and a coarseness in statistical models of taphonomic process. Each of these features provides useful opportunities for statisticians and data-generating researchers to assess what we know about the signal and the noise in paleo data and to improve inference about past changes in ecosystem state.

  12. Development of an Ontology to Model Medical Errors, Information Needs, and the Clinical Communication Space

    PubMed Central

    Stetson, Peter D.; McKnight, Lawrence K.; Bakken, Suzanne; Curran, Christine; Kubose, Tate T.; Cimino, James J.

    2002-01-01

    Medical errors are common, costly and often preventable. Work in understanding the proximal causes of medical errors demonstrates that systems failures predispose to adverse clinical events. Most of these systems failures are due to lack of appropriate information at the appropriate time during the course of clinical care. Problems with clinical communication are common proximal causes of medical errors. We have begun a project designed to measure the impact of wireless computing on medical errors. We report here on our efforts to develop an ontology representing the intersection of medical errors, information needs and the communication space. We will use this ontology to support the collection, storage and interpretation of project data. The ontology’s formal representation of the concepts in this novel domain will help guide the rational deployment of our informatics interventions. A real-life scenario is evaluated using the ontology in order to demonstrate its utility.

  13. A pilot study of the safety implications of Australian nurses' sleep and work hours.

    PubMed

    Dorrian, Jillian; Lamond, Nicole; van den Heuvel, Cameron; Pincombe, Jan; Rogers, Ann E; Dawson, Drew

    2006-01-01

    The frequency and severity of adverse events in Australian healthcare are under increasing scrutiny. A recent state government report identified 31 events involving "death or serious [patient] harm" and 452 "very high risk" incidents. Australia-wide, a previous study identified 2,324 adverse medical events (AME) in a single year, with more than half considered preventable. Despite the recognized link between fatigue and error in other industries, to date, few studies of medical errors have assessed the fatigue of the healthcare professionals involved. Nurses work extended and unpredictable hours with a lack of regular breaks and are therefore likely to experience elevated fatigue. Currently, there is very little available information on Australian nurses' sleep or fatigue levels, nor is there any information about whether this affects their performance. This study therefore aims to examine work hours, sleep, fatigue and error occurrence in Australian nurses. Using logbooks, 23 full-time nurses in a metropolitan hospital completed daily recordings for one month (644 days, 377 shifts) of their scheduled and actual work hours, sleep length and quality, sleepiness, and fatigue levels. Frequency and type of nursing errors, near errors, and observed errors (made by others) were recorded. Nurses reported struggling to remain awake during 36% of shifts. Moderate to high levels of stress, physical exhaustion, and mental exhaustion were reported on 23%, 40%, and 36% of shifts, respectively. Extreme drowsiness while driving or cycling home was reported on 45 occasions (11.5%), with three reports of near accidents. Overall, 20 errors, 13 near errors, and 22 observed errors were reported. The perceived potential consequences for the majority of errors were minor; however, 11 errors were associated with moderate and four with potentially severe consequences. Nurses reported that they had trouble falling asleep on 26.8% of days, had frequent arousals on 34.0% of days, and that work-related concerns were either partially or fully responsible for their sleep disruption on 12.5% of occasions. Fourteen out of the 23 nurses reported using a sleep aid. The most commonly reported sleep aids were prescription medications (62.7%), followed by alcohol (26.9%). Total sleep duration was significantly shorter on workdays than days off (p < 0.01). In comparison to other workdays, sleep was significantly shorter on days when an error (p < 0.05) or a near error (p < 0.01) was recorded. In contrast, sleep was longer on workdays when someone else's error was recorded (p = 0.08). Logistic regression analysis indicated that sleep duration was a significant predictor of error occurrence (χ² = 6.739, p = 0.009, e^β = 0.727). The findings of this pilot study suggest that Australian nurses experience sleepiness and related physical symptoms at work and during their trip home. Further, a measurable number of errors of various types and severity occur. Less sleep may lead to the increased likelihood of making an error, and importantly, the decreased likelihood of catching someone else's error. These pilot results suggest that further investigation into the effects of sleep loss in nursing may be necessary for patient safety from an individual nurse perspective and from a healthcare team perspective.
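
    A short note on the reported statistic, using only numbers quoted in the abstract: e^β is an odds ratio, so each additional hour of sleep multiplies the odds of an error by about 0.727.

    ```python
    import math

    odds_ratio = 0.727           # e^beta as reported in the abstract
    beta = math.log(odds_ratio)  # implied regression coefficient, about -0.319
    print(f"beta = {beta:.3f}; two extra hours of sleep -> odds x {odds_ratio ** 2:.3f}")
    ```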

  14. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder

    PubMed Central

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths’ performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD. PMID:29075227

  15. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder.

    PubMed

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD.

  16. Putting Meaning Back Into the Mean: A Comment on the Misuse of Elementary Statistics in a Sample of Manuscripts Submitted to Clinical Therapeutics.

    PubMed

    Forrester, Janet E

    2015-12-01

    Errors in the statistical presentation and analyses of data in the medical literature remain common despite efforts to improve the review process, including the creation of guidelines for authors and the use of statistical reviewers. This article discusses common elementary statistical errors seen in manuscripts recently submitted to Clinical Therapeutics and describes some ways in which authors and reviewers can identify errors and thus correct them before publication. A nonsystematic sample of manuscripts submitted to Clinical Therapeutics over the past year was examined for elementary statistical errors. Clinical Therapeutics has many of the same errors that reportedly exist in other journals. Authors require additional guidance to avoid elementary statistical errors and incentives to use the guidance. Implementation of reporting guidelines for authors and reviewers by journals such as Clinical Therapeutics may be a good approach to reduce the rate of statistical errors. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.

  17. A new approach based on Machine Learning for predicting corneal curvature (K1) and astigmatism in patients with keratoconus after intracorneal ring implantation.

    PubMed

    Valdés-Mas, M A; Martín-Guerrero, J D; Rupérez, M J; Pastor, F; Dualde, C; Monserrat, C; Peris-Martínez, C

    2014-08-01

    Keratoconus (KC) is the most common type of corneal ectasia. Corneal transplantation was the treatment of choice until the last decade; however, intra-corneal ring implantation has become increasingly common as a treatment for KC, avoiding the need for a corneal transplantation. This work proposes a new approach based on Machine Learning to predict the vision gain of KC patients after ring implantation. That vision gain is assessed by means of the corneal curvature and the astigmatism. Different models were proposed; the best results were achieved by an artificial neural network based on the Multilayer Perceptron. The error provided by the best model was 0.97D for corneal curvature and 0.93D for astigmatism. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  19. Delays in the operating room: signs of an imperfect system.

    PubMed

    Wong, Janice; Khu, Kathleen Joy; Kaderali, Zul; Bernstein, Mark

    2010-06-01

    Delays in the operating room have a negative effect on its efficiency and the working environment. In this prospective study, we analyzed data on perioperative system delays. One neurosurgeon prospectively recorded all errors, including perioperative delays, for consecutive patients undergoing elective procedures from May 2000 to February 2009. We analyzed the prevalence, causes and impact of perioperative system delays that occurred in one neurosurgeon's practice. A total of 1531 elective surgical cases were performed during the study period. Delays were the most common type of error (33.6%), and more than half (51.4%) of all cases had at least 1 delay. The most common cause of delay was equipment failure. The first cases of the day and cranial cases had more delays than subsequent cases and spinal cases, respectively. A delay in starting the first case was associated with subsequent delays. Delays frequently occur in the operating room and have a major effect on patient flow and resource utilization. Thorough documentation of perioperative delays provides a basis for the development of solutions for improving operating room efficiency and illustrates the principles underlying the causes of operating room delays across surgical disciplines.

  20. What is the epidemiology of medication errors, error-related adverse events and risk factors for errors in adults managed in community care contexts? A systematic review of the international literature.

    PubMed

    Assiri, Ghadah Asaad; Shebl, Nada Atef; Mahmoud, Mansour Adam; Aloudah, Nouf; Grant, Elizabeth; Aljadhey, Hisham; Sheikh, Aziz

    2018-05-05

    To investigate the epidemiology of medication errors and error-related adverse events in adults in primary care, ambulatory care and patients' homes. Systematic review. Six international databases were searched for publications between 1 January 2006 and 31 December 2015. Two researchers independently extracted data from eligible studies and assessed their quality using established instruments. Synthesis of data was informed by an appreciation of the medicines' management process and the conceptual framework from the International Classification for Patient Safety. 60 studies met the inclusion criteria, of which 53 studies focused on medication errors, 3 on error-related adverse events and 4 on risk factors only. The prevalence of prescribing errors was reported in 46 studies: prevalence estimates ranged widely from 2% to 94%. Inappropriate prescribing was the most common type of error reported. Only one study reported the prevalence of monitoring errors, finding that incomplete therapeutic/safety laboratory-test monitoring occurred in 73% of patients. The incidence of preventable adverse drug events (ADEs) was estimated as 15/1000 person-years, the prevalence of drug-drug interaction-related adverse drug reactions as 7% and the prevalence of preventable ADEs as 0.4%. A number of patient, healthcare professional and medication-related risk factors were identified, including the number of medications used by the patient, increased patient age, the number of comorbidities, use of anticoagulants, cases where more than one physician was involved in patients' care and care being provided by family physicians/general practitioners. A very wide variation in the medication error and error-related adverse event rates is reported across the studies, reflecting heterogeneity in the populations studied, study designs employed and outcomes evaluated. This review has identified important limitations and discrepancies in the methodologies used and gaps in the literature on the epidemiology and outcomes of medication errors in community settings. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  1. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    PubMed

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  2. Reliability Generalization: The Importance of Considering Sample Specificity, Confidence Intervals, and Subgroup Differences.

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Daniel, Larry G.

    The purposes of this paper are to identify common errors made by researchers when dealing with reliability coefficients and to outline best practices for reporting and interpreting reliability coefficients. Common errors that researchers make are: (1) stating that the instruments are reliable; (2) incorrectly interpreting correlation coefficients;…

  3. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

    Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic, distributed as a χ² random variable with q degrees of freedom (where q parameters are constrained by H₀), and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvement in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles, but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors, but the F-test provided some power at smaller n and extreme quantiles when the T-test had none because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles. Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
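
    A schematic of the permutation idea behind such tests (illustrative Python; this is neither the authors' exact F statistic nor their double-permutation procedure): permute the predictor of interest to build a null distribution for a chosen test statistic.

    ```python
    import numpy as np

    def permutation_pvalue(stat_fn, x, y, n_perm=999, seed=0):
        """One-sided permutation p-value for stat_fn(x, y) under H0: no association."""
        rng = np.random.default_rng(seed)
        observed = stat_fn(x, y)
        null = [stat_fn(rng.permutation(x), y) for _ in range(n_perm)]
        return (1 + sum(s >= observed for s in null)) / (n_perm + 1)

    rng = np.random.default_rng(1)
    x = rng.normal(size=50)
    y = 0.5 * x + rng.normal(size=50)
    print(permutation_pvalue(lambda a, b: abs(np.corrcoef(a, b)[0, 1]), x, y))
    ```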

  4. Ensemble codes involving hippocampal neurons are at risk during delayed performance tests.

    PubMed

    Hampson, R E; Deadwyler, S A

    1996-11-26

    Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble "miscodes" of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated "strength" of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was "weak," indicating that the two types of errors were "linked." It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding "strategy" that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved "strongly" encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the "carried over" information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the "dynamic" nature of the role hippocampus plays in delay type memory tasks.

  5. Geolocation error tracking of ZY-3 three line cameras

    NASA Astrophysics Data System (ADS)

    Pan, Hongbo

    2017-01-01

    The high-accuracy geolocation of high-resolution satellite images (HRSIs) is a key issue for mapping and integrating multi-temporal, multi-sensor images. In this manuscript, we propose a new geometric framework for analysing the geometric error of a stereo HRSI, in which the geolocation error can be divided into three parts: the epipolar direction, the cross-base direction, and the height direction. With this framework, we proved that the height error of three line cameras (TLCs) is independent of nadir images, and that the terrain effect has a limited impact on the geolocation errors. For ZY-3 error sources, the drift errors in both the pitch and roll angles and their influence on the geolocation accuracy are analysed. Epipolar and common tie-point constraints are proposed to study the bundle adjustment of HRSIs. Epipolar constraints explain why the relative orientation can reduce the number of compensation parameters in the cross-base direction and why it has a limited impact on the height accuracy. The common tie points adjust the pitch-angle errors to be consistent with each other for TLCs. Therefore, free-net bundle adjustment of a single strip cannot significantly improve the geolocation accuracy. Furthermore, the epipolar and common tie-point constraints cause the error to propagate into the adjacent strip when multiple strips are involved in the bundle adjustment, which results in the same attitude uncertainty throughout the whole block. Two adjacent strips (Orbit 305 and Orbit 381, covering 7 and 12 standard scenes, respectively) and 308 ground control points (GCPs) were used for the experiments. The experiments validate the aforementioned theory. The planimetric and height root mean square errors were 2.09 and 1.28 m, respectively, when two GCPs were placed at the beginning and end of the block.

  6. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

    Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and foundation for the development of medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.

  7. Error Analysis in Mathematics. Technical Report #1012

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  8. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  9. Lexical Errors and Accuracy in Foreign Language Writing. Second Language Acquisition

    ERIC Educational Resources Information Center

    del Pilar Agustin Llach, Maria

    2011-01-01

    Lexical errors are a determinant in gaining insight into vocabulary acquisition, vocabulary use and writing quality assessment. Lexical errors are very frequent in the written production of young EFL learners, but they decrease as learners gain proficiency. Misspellings are the most common category, but formal errors give way to semantic-based…

  10. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  11. Random measurement error: Why worry? An example of cardiovascular risk factors.

    PubMed

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
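
    A simulation sketch of the paper's point (all values illustrative, not taken from the example studies): classical measurement error in a confounder can push an adjusted exposure-outcome estimate away from the null rather than attenuate it.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    c = rng.normal(size=n)                       # confounder
    x = 0.8 * c + rng.normal(size=n)             # exposure, confounded by c
    y = 1.0 * x + 1.0 * c + rng.normal(size=n)   # true exposure effect = 1.0

    def adjusted_effect(x, c, y):
        """OLS coefficient on x, adjusting for (possibly mismeasured) c."""
        X = np.column_stack([np.ones_like(x), x, c])
        return np.linalg.lstsq(X, y, rcond=None)[0][1]

    print("confounder measured exactly:   ", round(adjusted_effect(x, c, y), 3))
    c_noisy = c + rng.normal(size=n)             # classical error on the confounder
    print("confounder measured with error:", round(adjusted_effect(x, c_noisy, y), 3))
    ```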

  12. A new accuracy measure based on bounded relative error for time series forecasting

    PubMed Central

    Twycross, Jamie; Garibaldi, Jonathan M.

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
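
    A sketch of the measure as described above (assuming the paper's definitions: BRAE_t = |e_t| / (|e_t| + |e*_t|) against a benchmark forecast, MBRAE its mean, and UMBRAE = MBRAE / (1 - MBRAE); verify against the published formulas before relying on this):

    ```python
    import numpy as np

    def umbrae(actual, forecast, benchmark):
        actual = np.asarray(actual, dtype=float)
        e = np.abs(actual - np.asarray(forecast, dtype=float))
        e_star = np.abs(actual - np.asarray(benchmark, dtype=float))
        brae = e / (e + e_star)       # bounded in [0, 1]; undefined if both errors are 0
        mbrae = brae.mean()
        return mbrae / (1.0 - mbrae)  # unscaled: < 1 means better than the benchmark

    actual, forecast = [10, 12, 14, 13], [11, 12, 13, 15]
    naive = [10, 10, 12, 14]          # e.g. a random-walk benchmark
    print(round(umbrae(actual, forecast, naive), 3))
    ```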

  13. A new accuracy measure based on bounded relative error for time series forecasting.

    PubMed

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.

  14. Errors Analysis of Students in Mathematics Department to Learn Plane Geometry

    NASA Astrophysics Data System (ADS)

    Mirna, M.

    2018-04-01

    This article describes the results of descriptive qualitative research revealing the locations, types, and causes of student errors in answering plane geometry problems at the problem-solving level. Answers from 59 students on three test items showed that errors ranged from misunderstanding the concepts and principles of geometry itself to errors in applying them to problem solving. The error types consisted of concept errors, principle errors, and operational errors. Reflection with four subjects revealed the causes of the errors: 1) students' learning motivation is very low, 2) in their high school learning experience, geometry was treated as unimportant, 3) students have very little experience using their reasoning in solving problems, and 4) students' reasoning ability is still very low.

  15. A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.

    2013-07-01

    There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material, and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
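
    An illustrative sketch of the risk comparison described above (all probabilities and costs hypothetical): expected loss weighs both error types at once, instead of fixing one rate and minimizing the other.

    ```python
    def risk(p_threat, p_miss, p_false_alarm, cost_miss, cost_false_alarm):
        """Expected loss of an operating point, combining both detection error types."""
        return (p_threat * p_miss * cost_miss
                + (1.0 - p_threat) * p_false_alarm * cost_false_alarm)

    # Two hypothetical operating points of a detection algorithm.
    for p_miss, p_fa in [(0.05, 0.20), (0.15, 0.05)]:
        print(p_miss, p_fa, risk(1e-4, p_miss, p_fa, cost_miss=1e6, cost_false_alarm=10.0))
    ```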

  16. Action Monitoring in boys with ADHD, their Nonaffected Siblings and Normal Controls: Evidence for an Endophenotype

    PubMed Central

    Albrecht, Bjoern; Brandeis, Daniel; Uebel, Henrik; Heinrich, Hartmut; Mueller, Ueli C.; Hasselhorn, Marcus; Steinhausen, Hans-Christoph; Rothenberger, Aribert; Banaschewski, Tobias

    2008-01-01

    Background: Attention deficit/hyperactivity disorder is a very common and highly heritable child psychiatric disorder associated with dysfunctions in fronto-striatal networks that control attention and response organisation. The aim of this study was to investigate whether features of action monitoring related to dopaminergic functions represent endophenotypes, that is, brain functions on the pathway from genes and environmental risk factors to behaviour. Methods: Action monitoring and error processing, as indicated by behavioural and electrophysiological parameters during a flanker task, were examined in boys with ADHD combined type according to DSM-IV (N=68), their nonaffected siblings (N=18), and healthy controls with no known family history of ADHD (N=22). Results: Boys with ADHD displayed slower and more variable reaction times. Error negativity (Ne) was smaller in boys with ADHD compared to healthy controls, while nonaffected siblings displayed intermediate amplitudes, following the linear model predicted by genetic concordance. The three groups did not differ on error positivity (Pe). N2 amplitude enhancement due to conflict (incongruent flankers) was reduced in the ADHD group. Nonaffected siblings also displayed intermediate N2 enhancement. Conclusions: Converging evidence from behavioural and ERP findings suggests that action monitoring and initial error processing, both related to dopaminergically modulated functions of the anterior cingulate cortex, might be an endophenotype related to ADHD. PMID:18339358

  17. What triggers catch-up saccades during visual tracking?

    PubMed

    de Brouwer, Sophie; Yuksel, Demet; Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe

    2002-03-01

    When tracking moving visual stimuli, primates orient their visual axis by combining two kinds of eye movements, smooth pursuit and saccades, that have very different dynamics. Yet, the mechanisms that govern the decision to switch from one type of eye movement to the other are still poorly understood, even though they could bring a significant contribution to the understanding of how the CNS combines different kinds of control strategies to achieve a common motor and sensory goal. In this study, we investigated the oculomotor responses to a large range of different combinations of position error and velocity error during visual tracking of moving stimuli in humans. We found that the oculomotor system uses a prediction of the time at which the eye trajectory will cross the target, defined as the "eye crossing time" (T_XE). The eye crossing time, which depends on both position error and velocity error, is the criterion used to switch between smooth and saccadic pursuit, i.e., to trigger catch-up saccades. On average, for T_XE between 40 and 180 ms, no saccade is triggered and target tracking remains purely smooth. Conversely, when T_XE becomes smaller than 40 ms or larger than 180 ms, a saccade is triggered after a short latency (around 125 ms).
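
    A sketch of the trigger rule described above, assuming the common formulation T_XE = -PE/RS (position error over retinal slip, i.e. the time until the current velocity error closes the position gap); sign conventions vary across papers:

    ```python
    def catch_up_saccade_needed(pe_deg, rs_deg_per_s, lo_s=0.040, hi_s=0.180):
        """True if T_XE falls outside the ~40-180 ms window where pursuit stays smooth."""
        if rs_deg_per_s == 0.0:
            return abs(pe_deg) > 0.0        # gap never closes smoothly
        t_xe = -pe_deg / rs_deg_per_s       # seconds; negative means diverging
        return not (lo_s <= t_xe <= hi_s)

    print(catch_up_saccade_needed(1.0, -10.0))  # T_XE = 100 ms -> stays smooth (False)
    print(catch_up_saccade_needed(2.0, -5.0))   # T_XE = 400 ms -> saccade (True)
    ```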

  18. Improved Conflict Detection for Reducing Operational Errors in Air Traffic Control

    NASA Technical Reports Server (NTRS)

    Paielli, Russell A.; Erzberger, Heinz

    2003-01-01

    An operational error is an incident in which an air traffic controller allows the separation between two aircraft to fall below the minimum separation standard. The rates of such errors in the US have increased significantly over the past few years. This paper proposes new detection methods that can help correct this trend by improving on the performance of Conflict Alert, the existing software in the Host Computer System that is intended to detect and warn controllers of imminent conflicts. In addition to the usual trajectory based on the flight plan, a "dead-reckoning" trajectory (current velocity projection) is also generated for each aircraft and checked for conflicts. Filters for reducing common types of false alerts were implemented. The new detection methods were tested in three different ways. First, a simple flightpath command language was developed to generate precisely controlled encounters for the purpose of testing the detection software. Second, written reports and tracking data were obtained for actual operational errors that occurred in the field, and these were "replayed" to test the new detection algorithms. Finally, the detection methods were used to shadow live traffic, and performance was analysed, particularly with regard to the false-alert rate. The results indicate that the new detection methods can provide timely warnings of imminent conflicts more consistently than Conflict Alert.
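
    A hedged sketch of the dead-reckoning check described above: each aircraft is projected along its current velocity vector, and the pair is flagged when the projected separation falls below a minimum standard. The 5 nmi standard, 300 s horizon, and all names are illustrative assumptions, not the Conflict Alert implementation:

        import numpy as np

        def dead_reckoning_conflict(p1, v1, p2, v2, horizon_s=300, min_sep_nmi=5.0):
            """p1, p2: 2-D positions (nmi); v1, v2: 2-D velocities (nmi/s)."""
            dp = np.asarray(p2) - np.asarray(p1)    # relative position
            dv = np.asarray(v2) - np.asarray(v1)    # relative velocity
            for t in np.arange(0.0, horizon_s, 1.0):
                if np.linalg.norm(dp + dv * t) < min_sep_nmi:
                    return True, t                  # conflict predicted at time t
            return False, None

        # Head-on encounter, 40 nmi apart, each at ~480 kt (0.133 nmi/s):
        print(dead_reckoning_conflict([0, 0], [0.133, 0], [40, 0], [-0.133, 0]))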

  19. Acoustic evidence for phonologically mismatched speech errors.

    PubMed

    Gormley, Andrea

    2015-04-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated or mismatch errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. Such errors could arise during the processing of phonological rules, or they could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.

  20. The pitfalls of premature closure: clinical decision-making in a case of aortic dissection

    PubMed Central

    Kumar, Bharat; Kanna, Balavenkatesh; Kumar, Suresh

    2011-01-01

    Premature closure is a type of cognitive error in which the physician fails to consider reasonable alternatives after an initial diagnosis is made. It is a common cause of delayed diagnosis and misdiagnosis borne out of a faulty clinical decision-making process. The authors present a case of aortic dissection in which premature closure was avoided by the aggressive pursuit of the appropriate differential diagnosis, and discuss the importance of disciplined clinical decision-making in the setting of chest pain. PMID:22679162

  1. Prevention, Evaluation, and Rehabilitation of Cycling-Related Injury.

    PubMed

    Kotler, Dana H; Babu, Ashwin N; Robidoux, Greg

    2016-01-01

    The unique quality of the bicycle is its ability to accommodate a wide variety of injuries and disabilities. Cycling for recreation, transportation, and competition is growing nationwide, and has proven health and societal benefits. The demands of each type of cycling dictate the necessary equipment, as well as the potential for injury. Prevention of cycling-related injury in both the athlete and the recreational cyclist involves understanding the common mechanisms of both traumatic and overuse injury, and early correction of strength and flexibility imbalances, technique errors, and problems with bicycle fit.

  2. Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions

    ERIC Educational Resources Information Center

    Yormaz, Seha; Sünbül, Önder

    2017-01-01

    This study aims to determine the Type I error rates and power of the S[subscript 1] and S[subscript 2] indices and the kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how the way copying groups are created affects the Type I error rates and power of the kappa statistic. In this study,…

  3. Pharmacists' interventions in prescribing errors at hospital discharge: an observational study in the context of an electronic prescribing system in a UK teaching hospital.

    PubMed

    Abdel-Qader, Derar H; Harper, Lindsay; Cantrill, Judith A; Tully, Mary P

    2010-11-01

    Pharmacists have an essential role in improving drug usage and preventing prescribing errors (PEs). PEs at the interface of care are common, sometimes leading to adverse drug events (ADEs). This was the first study to investigate, using a computerized search method, the number, types and severity of PEs, pharmacists' impact on them, and predictors of PEs in the context of electronic prescribing (e-prescribing) at hospital discharge. This was a retrospective, observational, 4-week study, carried out in 2008 in the Medical and Elderly Care wards of a 904-bed teaching hospital in the northwest of England, operating an e-prescribing system at discharge. Details were obtained, using a systematic computerized search of the system, of medication orders either entered by doctors and discontinued by pharmacists or entered by pharmacists. Meetings were conducted within 5 days of data extraction with pharmacists doing their routine clinical work, who categorized the occurrence, type and severity of their interventions using a scale. An independent senior pharmacist retrospectively rated the severity and potential impact, and subjectively judged, based on experience, whether any error was a computer-related error (CRE). Discrepancies were resolved by multidisciplinary discussion. The Statistical Package for Social Sciences was used for descriptive data analysis. For the PE predictors, a multivariate logistic regression was performed using STATA 7. Nine predictors were selected a priori from available prescribers', patients' and drug data. There were 7920 medication orders entered for 1038 patients (doctors entered 7712 orders; pharmacists entered 208 omitted orders). There were 675 (8.5% of 7920) interventions by pharmacists; 11 were not associated with PEs. Incidences of erroneous orders and patients with error were 8.0% (95% CI 7.4, 8.5 [n = 630/7920]) and 20.4% (95% CI 18.1, 22.9 [n = 212/1038]), respectively. The PE incidence was 8.4% (95% CI 7.8, 9.0 [n = 664/7920]). The top three medications associated with PEs were paracetamol (acetaminophen; 30 [4.8%]), salbutamol (albuterol; 28 [4.4%]) and omeprazole (25 [4.0%]). Pharmacists intercepted 524 (83.2%) erroneous orders without referring to doctors, and 70% of erroneous orders within 24 hours. Omission (31.0%), drug selection (29.4%) and dosage regimen (18.1%) error types accounted for >75% of PEs. There were 18 (2.9%) serious, 481 (76.3%) significant and 131 (20.8%) minor erroneous orders. For most erroneous orders (469 [74.4%]), both the severity of the error and the potential impact of the pharmacist's intervention were rated as significant. CREs (n = 279) accounted for 44.3% of erroneous orders. There was a significant difference in severity between CREs and non-CREs (χ2 = 38.88; df = 4; p < 0.001), with CREs being less severe than non-CREs. Drugs with multiple oral formulations (odds ratio [OR] 2.1; 95% CI 1.25, 3.37; p = 0.004) and prescribing by junior doctors (OR 2.54; 95% CI 1.08, 5.99; p = 0.03) were significant predictors of PEs. PEs commonly occur at hospital discharge, even with the use of an e-prescribing system. User and computer factors both appeared to contribute to the high error rate. The e-prescribing system facilitated the systematic extraction of data to investigate PEs in hospital practice. Pharmacists play an important role in rapidly documenting and preventing PEs before they reach and possibly harm patients. Pharmacists should understand CREs, so they complement, rather than duplicate, the e-prescribing system's strengths.

  4. A comparison of acoustic monitoring methods for common anurans of the northeastern United States

    USGS Publications Warehouse

    Brauer, Corinne; Donovan, Therese; Mickey, Ruth M.; Katz, Jonathan; Mitchell, Brian R.

    2016-01-01

    Many anuran monitoring programs now include autonomous recording units (ARUs). These devices collect audio data for extended periods of time with little maintenance and at sites where traditional call surveys might be difficult. Additionally, computer software programs have grown increasingly accurate at automatically identifying the calls of species. However, increased automation may cause increased error. We collected 435 min of audio data with 2 types of ARUs at 10 wetland sites in Vermont and New York, USA, from 1 May to 1 July 2010. For each minute, we determined presence or absence of 4 anuran species (Hyla versicolor, Pseudacris crucifer, Anaxyrus americanus, and Lithobates clamitans) using 1) traditional human identification versus 2) computer-mediated identification with the software package Song Scope® (Wildlife Acoustics, Concord, MA). Detections were compared with a data set consisting of verified calls in order to quantify false positive, false negative, true positive, and true negative rates. Multinomial logistic regression analysis revealed a strong (P < 0.001) 3-way interaction between ARU recorder type, identification method, and focal species, as well as a trend in the main effect of rain (P = 0.059). Overall, human surveyors had the lowest total error rate (<2%) compared with 18–31% total errors with automated methods. Total error rates varied by species, ranging from 4% for A. americanus to 26% for L. clamitans. The presence of rain may reduce false negative rates. For survey minutes where anurans were known to be calling, the odds of a false negative increased when fewer individuals of the same species were calling.

  5. Evaluation of drug administration errors in a teaching hospital

    PubMed Central

    2012-01-01

    Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. Identifying their determinants helps in designing targeted interventions. PMID:22409837

  6. Evaluation of drug administration errors in a teaching hospital.

    PubMed

    Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre

    2012-03-12

    Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Medication administration errors are frequent. Identifying their determinants helps in designing targeted interventions.

  7. Quantifying the burden of opioid medication errors in adult oncology and palliative care settings: A systematic review.

    PubMed

    Heneka, Nicole; Shaw, Tim; Rowett, Debra; Phillips, Jane L

    2016-06-01

    Opioids are the primary pharmacological treatment for cancer pain and, in the palliative care setting, are routinely used to manage symptoms at the end of life. Opioids are one of the most frequently reported drug classes in medication errors causing patient harm. Despite their widespread use, little is known about the incidence and impact of opioid medication errors in oncology and palliative care settings. To determine the incidence, types and impact of reported opioid medication errors in adult oncology and palliative care patient settings. A systematic review. Five electronic databases and the grey literature were searched from 1980 to August 2014. Empirical studies published in English, reporting data on opioid medication error incidence, types or patient impact, within adult oncology and/or palliative care services, were included. Popay's narrative synthesis approach was used to analyse data. Five empirical studies were included in this review. Opioid error incidence rate was difficult to ascertain as each study focussed on a single narrow area of error. The predominant error type related to deviation from opioid prescribing guidelines, such as incorrect dosing intervals. None of the included studies reported the degree of patient harm resulting from opioid errors. This review has highlighted the paucity of the literature examining opioid error incidence, types and patient impact in adult oncology and palliative care settings. Defining, identifying and quantifying error reporting practices for these populations should be an essential component of future oncology and palliative care quality and safety initiatives. © The Author(s) 2015.

  8. Fat and Sugar Metabolism During Exercise in Patients With Metabolic Myopathy

    ClinicalTrials.gov

    2017-08-31

    Metabolism, Inborn Errors; Lipid Metabolism, Inborn Errors; Carbohydrate Metabolism, Inborn Errors; Long-Chain 3-Hydroxyacyl-CoA Dehydrogenase Deficiency; Glycogenin-1 Deficiency (Glycogen Storage Disease Type XV); Carnitine Palmitoyl Transferase 2 Deficiency; VLCAD Deficiency; Medium-chain Acyl-CoA Dehydrogenase Deficiency; Multiple Acyl-CoA Dehydrogenase Deficiency; Carnitine Transporter Deficiency; Neutral Lipid Storage Disease; Glycogen Storage Disease Type II; Glycogen Storage Disease Type III; Glycogen Storage Disease Type IV; Glycogen Storage Disease Type V; Muscle Phosphofructokinase Deficiency; Phosphoglucomutase 1 Deficiency; Phosphoglycerate Mutase Deficiency; Phosphoglycerate Kinase Deficiency; Phosphorylase Kinase Deficiency; Beta Enolase Deficiency; Lactate Dehydrogenase Deficiency; Glycogen Synthase Deficiency

  9. Iatrogenic Errors during Root Canal Instrumentation Performed by Dental Students

    PubMed Central

    Hendi, Seyedeh Sareh; Karkehabadi, Hamed; Eskandarloo, Amir

    2018-01-01

    Introduction: The present study was designed to investigate the training quality and its association with the quality of root canal therapy performed by fifth-year dentistry students. Methods and Materials: A total of 432 records of endodontic treatment performed by fifth-year dentistry students were qualified for further investigation. Radiographs were assessed by two independent endodontists. Apical transportation, apical perforation, gouging, ledge formation, and the quality of temporary restoration were the error types investigated in the present study. Results: The prevalence of apical transportation, ledge formation, and apical perforation errors was significantly higher in molars in comparison with other types of teeth. The most prevalent type of error was apical transportation, which was significantly more frequent in mandibular teeth. There were no significant differences among teeth in terms of other types of errors. Conclusion: The quality of training provided for dentistry students should be improved and the endodontic curriculum should be modified. PMID:29692848

  10. Single-Event Upset Characterization of Common First- and Second-Order All-Digital Phase-Locked Loops

    NASA Astrophysics Data System (ADS)

    Chen, Y. P.; Massengill, L. W.; Kauppila, J. S.; Bhuva, B. L.; Holman, W. T.; Loveless, T. D.

    2017-08-01

    The single-event upset (SEU) vulnerability of common first- and second-order all-digital-phase-locked loops (ADPLLs) is investigated through field-programmable gate array-based fault injection experiments. SEUs in the highest order pole of the loop filter and fraction-based phase detectors (PDs) may result in the worst case error response, i.e., limit cycle errors, often requiring system restart. SEUs in integer-based linear PDs may result in loss-of-lock errors, while SEUs in bang-bang PDs only result in temporary-frequency errors. ADPLLs with the same frequency tuning range but fewer bits in the control word exhibit better overall SEU performance.

  11. Impact of a pharmacy student-driven medication delivery service at hospital discharge.

    PubMed

    Rogers, Jacalyn; Pai, Vinita; Merandi, Jenna; Catt, Char; Cole, Justin; Yarosz, Shannon; Wehr, Allison; Durkin, Kayla; Kaczor, Chet

    2017-03-01

    A pharmacy student-driven discharge service developed to reduce the number of medication errors on after-visit summaries (AVSs) is discussed. An audit of AVS documents was conducted before the implementation period (September 3 to October 23, 2013) to identify medication errors. As part of the audit, a pharmacist review of the discharge medication list was completed to determine the number and types of errors that occurred. A student-driven discharge service with AVS review was developed in collaboration with nursing and medical residents. Students reviewed a patient's AVS, delivered the discharge prescriptions to the bedside, and conducted medication reconciliation with the patient and family. The AVS audit was conducted after implementation of these services to assess the impact on medication errors. It was observed that 72% (108 of 150) of AVSs contained at least 1 error before discharge and AVS review. During the 2-month postimplementation period (September 3 to October 23, 2014), this decreased to 27% (34 of 127), resulting in a 52% absolute reduction in the number of AVSs with at least 1 medication error (p < 0.0001). The most common error was an as-needed medication with no indication, which decreased from 55% in the preimplementation audit to 16% in the postimplementation audit. Prescribing to Nationwide Children's Hospital's outpatient pharmacy increased from 57% in the preimplementation period to 73% in the postimplementation period for the general pediatrics service. A pharmacy student-driven discharge and medication delivery service reduced the number of AVSs containing medication errors and increased access to medications for patients. Copyright © 2017 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  12. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  13. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  14. Magnetometer-augmented IMU simulator: in-depth elaboration.

    PubMed

    Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel

    2015-03-04

    The location of objects is a growing research topic due, for instance, to the expansion of civil drones and intelligent vehicles. This expansion was made possible through the development of microelectromechanical systems (MEMS), inexpensive and miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements, given a specific input trajectory. This will allow the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be analytically determined. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. As MEMS inertial measurement units (IMUs) are nowadays generally magnetometer-augmented, an absolute world magnetic model is implemented. After the determination of the perfect measurement (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests.

  15. Magnetometer-Augmented IMU Simulator: In-Depth Elaboration

    PubMed Central

    Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel

    2015-01-01

    The location of objects is a growing research topic due, for instance, to the expansion of civil drones and intelligent vehicles. This expansion was made possible through the development of microelectromechanical systems (MEMS), inexpensive and miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements, given a specific input trajectory. This will allow the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be analytically determined. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. As MEMS inertial measurement units (IMUs) are nowadays generally magnetometer-augmented, an absolute world magnetic model is implemented. After the determination of the perfect measurement (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests. PMID:25746095
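
    The two records above describe a two-stage design: an analytic error-free sensor model followed by a realistic error model. A minimal single-axis gyroscope sketch under that design, with all noise parameters assumed for illustration:

        import numpy as np

        rng = np.random.default_rng(1)
        dt = 0.01                                  # 100 Hz sampling
        t = np.arange(0.0, 10.0, dt)
        angle = 0.5 * np.sin(0.2 * 2 * np.pi * t)  # input trajectory (rad)

        omega_true = np.gradient(angle, dt)        # error-free rate measurement

        bias0 = 0.002                                     # constant bias (rad/s)
        bias_rw = np.cumsum(rng.normal(0, 1e-4, t.size))  # bias random walk
        noise = rng.normal(0, 0.01, t.size)               # white measurement noise
        omega_meas = omega_true + bias0 + bias_rw + noise # realistic gyro output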

  16. Mixed-effects location and scale Tobit joint models for heterogeneous longitudinal data with skewness, detection limits, and measurement errors.

    PubMed

    Lu, Tao

    2017-01-01

    The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for the heteroscedasticity commonly observed in between- and within-subject variations. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limit of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data from many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.

  17. Parameter estimation for slit-type scanning sensors

    NASA Technical Reports Server (NTRS)

    Fowler, J. W.; Rolfe, E. G.

    1981-01-01

    The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results which are quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.

  18. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
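
    The interrupted time-series analysis described above fits an immediate level change and a slope change at the implementation point. The sketch below shows the usual segmented-regression form on simulated data; the model form and all numbers are illustrative assumptions, not the study's data:

        # errors_t = b0 + b1*month + b2*post + b3*months_since_implementation + e_t
        import numpy as np

        months = np.arange(58)                          # 30 pre- + 28 post-months
        post = (months >= 30).astype(float)             # implementation indicator
        since = np.where(months >= 30, months - 29, 0)  # months since implementation

        rng = np.random.default_rng(2)
        errors = 16.7 - 5.0 * post - 0.34 * since + rng.normal(0, 1.0, months.size)

        X = np.column_stack([np.ones_like(post), months, post, since])
        b, *_ = np.linalg.lstsq(X, errors, rcond=None)
        print(dict(zip(["level", "pre_slope", "step", "slope_change"], b.round(2))))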

  19. A Study of Assessing Errors and Completeness of Research Application Forms Submitted to Institutional Ethics Committee (IEC) of a Tertiary Care Hospital.

    PubMed

    Shah, Pruthak C; Panchasara, Ashwin K; Barvaliya, Manish J; Tripathi, C B

    2016-09-01

    The application form for a research project is an essential document that must be submitted along with the research proposal to the Ethics Committee (EC). To check the completeness of, and to find the errors in, application forms submitted to the EC of a tertiary care hospital. The application forms of research projects submitted to the Institutional Review Board (IRB), Government Medical College, Bhavnagar, Gujarat, India, from January 2014 to June 2015 were analysed for completeness and errors with respect to the following: type of study, information about study investigators, sample size, study participants, title of the study, signatures of all investigators, regulatory approval, recruitment procedure, compensation to study participants, informed consent process, information about the sponsor, declaration of conflict of interest, plans for storage and maintenance of data, patient information sheet, informed consent forms and study-related documents. A total of 100 application forms were analysed. Among them, 98 were academic studies and 2 were industrial studies. The majority of academic studies were of the basic science type. In 63.26% of studies, the type of study was not mentioned in the title. The age group of subjects was not mentioned in 8.16% of application forms. In 34.6% of informed consent documents, the benefits of the study were not mentioned. The signature of investigators/co-investigators/Head of the Department was missing in 3.06% of cases. Our study suggests that the efficiency and speed of review would increase if investigators were more vigilant when completing application forms. Regular meetings would help resolve problems related to the content of application forms. Uniformity in the functioning of ECs could be achieved through a common application form for all ECs.

  20. Defining quality metrics and improving safety and outcome in allergy care.

    PubMed

    Lee, Stella; Stachler, Robert J; Ferguson, Berrylin J

    2014-04-01

    The delivery of allergy immunotherapy in the otolaryngology office is variable and lacks standardization. Quality metrics encompass the measurement of factors associated with good patient-centered care. These factors have yet to be defined in the delivery of allergy immunotherapy. We developed and applied quality metrics to 6 allergy practices affiliated with an academic otolaryngic allergy center. This work was conducted at a tertiary academic center providing care to over 1500 patients. We evaluated methods and variability between the 6 sites. Tracking of errors and anaphylaxis was initiated across all sites. A nationwide survey of academic and private allergists was used to collect data on current practice and use of quality metrics. The most common types of errors recorded were patient identification errors (n = 4), followed by vial mixing errors (n = 3) and dosing errors (n = 2). There were 7 episodes of anaphylaxis, of which 2 were secondary to dosing errors, for a rate of 0.01%, or 1 in every 10,000 injection visits/year. Site visits showed that 86% of key safety measures were followed. Analysis of nationwide survey responses revealed that quality metrics are still not well defined by either medical or otolaryngic allergy practices. Academic practices were statistically more likely to use quality metrics (p = 0.021) and to perform systems reviews and audits than private practices (p = 0.005). Quality metrics in allergy delivery can help improve safety and quality of care. These metrics need to be further defined by otolaryngic allergists in the changing health care environment. © 2014 ARS-AAOA, LLC.

  1. Quantifying the Performance of P-Type Transparent Conducting Oxides by Experimental Methods

    PubMed Central

    Fleischer, Karsten; Norton, Emma; Mullarkey, Daragh; Caffrey, David; Shvets, Igor V.

    2017-01-01

    Screening for potential new materials with experimental and theoretical methods has led to the discovery of many promising candidate materials for p-type transparent conducting oxides. It is difficult to reliably assess a good p-type transparent conducting oxide (TCO) from limited information available at an early experimental stage. In this paper we discuss the influence of sample thickness on simple transmission measurements and how the sample thickness can skew the commonly used figure of merit of TCOs and their estimated band gap. We discuss this using copper-deficient CuCrO2 as an example, as it was already shown to be a good p-type TCO grown at low temperatures. We outline a modified figure of merit reducing thickness-dependent errors, as well as how modern ab initio screening methods can be used to augment experimental methods to assess new materials for potential applications as p-type TCOs, p-channel transparent thin film transistors, and selective contacts in solar cells. PMID:28862695

  2. Secondary School Teachers' Pedagogical Content Knowledge of Some Common Student Errors and Misconceptions in Sets

    ERIC Educational Resources Information Center

    Kolitsoe Moru, Eunice; Qhobela, Makomosela

    2013-01-01

    The study investigated teachers' pedagogical content knowledge of common students' errors and misconceptions in sets. Five mathematics teachers from one Lesotho secondary school were the sample of the study. Questionnaires and interviews were used for data collection. The results show that teachers were able to identify the following students'…

  3. Addressing Common Student Errors with Classroom Voting in Multivariable Calculus

    ERIC Educational Resources Information Center

    Cline, Kelly; Parker, Mark; Zullo, Holly; Stewart, Ann

    2012-01-01

    One technique for identifying and addressing common student errors is the method of classroom voting, in which the instructor presents a multiple-choice question to the class, and after a few minutes for consideration and small group discussion, each student votes on the correct answer, often using a hand-held electronic clicker. If a large number…

  4. The Nature of Error in Adolescent Student Writing

    ERIC Educational Resources Information Center

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  5. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    ERIC Educational Resources Information Center

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
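
    The repeatability statistic described above is straightforward to compute. A worked sketch on invented data (the within-subject SD is the square root of the mean within-subject variance over repeated measurements, and repeatability is 2.77 times that value):

        import numpy as np

        # rows = subjects, columns = repeated measurements of the same quantity
        x = np.array([[12.1, 12.4],
                      [15.0, 14.6],
                      [10.8, 11.1],
                      [13.3, 13.9]])

        within_var = x.var(axis=1, ddof=1).mean()  # mean within-subject variance
        sw = np.sqrt(within_var)                   # within-subject SD (measurement error)
        print(f"measurement error (Sw): {sw:.3f}")
        print(f"repeatability (2.77 * Sw): {2.77 * sw:.3f}")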

  6. Prevalence and predictors of antibiotic prescription errors in an emergency department, Central Saudi Arabia.

    PubMed

    Alanazi, Menyfah Q; Al-Jeraisy, Majed I; Salam, Mahmoud

    2015-01-01

    Inappropriate antibiotic (ATB) prescriptions are a threat to patients, leading to adverse drug reactions, bacterial resistance and, subsequently, elevated hospital costs. Our aim was to evaluate ATB prescriptions in an emergency department of a tertiary care facility. A cross-sectional study was conducted by reviewing charts of patients complaining of infections. Patient characteristics (age, sex, weight, allergy, infection type) and prescription characteristics (class, dose, frequency, duration) were evaluated for appropriateness based on the AHFS Drug Information and the Drug Information Handbook. Descriptive and analytic statistics were applied. The sample, with an equal sex distribution, consisted of 5,752 cases: adults (≥15 years), 61%, and pediatrics (<15 years), 39%. Around 55% complained of respiratory tract infections, 25% of urinary tract infections (UTIs), and 20% of other infections. Broad-spectrum ATBs were prescribed in 76% of the cases. Before the prescription, 82% of pediatric patients had their weight taken, while 18% had their weight estimated. Allergy checking was done in only 8%. The prevalence of inappropriate ATB prescriptions with at least one type of error was 46.2% (pediatrics, 58%; adults, 39%). Errors were in ATB selection (2%), dosage (22%), frequency (4%), and duration (29%). Dosage and duration errors were significantly predominant among pediatric patients (P<0.001 and P<0.0001, respectively). Selection error was higher among adults (P=0.001). Age stratification and binary logistic regression were applied. Significant predictors of inappropriate prescriptions were associated with: 1) cephalosporin prescriptions (adults: P<0.001, adjusted odds ratio [adj OR] =3.31; pediatrics: P<0.001, adj OR =4.12) compared to penicillin; 2) UTIs (adults: P<0.001, adj OR =2.78; pediatrics: P=0.039, adj OR =0.73) compared to respiratory tract infections; 3) obtaining weight for pediatric patients before the prescription of ATB (P<0.001, adj OR =1.83) compared to those whose weight was estimated; and 4) broad-spectrum ATBs in adults (P=0.002, adj OR =0.67). The prevalence of ATB prescription errors in this emergency department was generally high and was particularly common with cephalosporins, narrow-spectrum ATBs, and UTIs.

  7. A Rasch Perspective

    ERIC Educational Resources Information Center

    Schumacker, Randall E.; Smith, Everett V., Jr.

    2007-01-01

    Measurement error is a common theme in classical measurement models used in testing and assessment. In classical measurement models, the definition of measurement error and the subsequent reliability coefficients differ on the basis of the test administration design. Internal consistency reliability specifies error due primarily to poor item…

  8. Spelling Errors of Dyslexic Children in Bosnian Language with Transparent Orthography

    ERIC Educational Resources Information Center

    Duranovic, Mirela

    2017-01-01

    The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in Bosnian language with transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% of phonological errors, 10% of orthographic errors, and 4%…

  9. Fault detection and isolation in motion monitoring system.

    PubMed

    Kim, Duk-Jin; Suk, Myoung Hoon; Prabhakaran, B

    2012-01-01

    Pervasive computing has become a very active research field. A watch that traces human movement can record motion boundaries and support studies of social-life patterns based on the areas a person visits. Pervasive computing also supports patient monitoring: a daily monitoring system enables longitudinal studies of conditions such as Alzheimer's disease, Parkinson's disease, or obesity. Due to the nature of on-body wireless monitoring sensors, however, signal noise or faulty-sensor errors can be present at any time. Many research works have addressed these problems, but only with a large number of deployed sensors. In this paper, we present faulty sensor detection and isolation using only two on-body sensors. We investigate three different types of sensor errors: the SHORT error, the CONSTANT error, and the NOISY SENSOR error (see Section V for details). Our experimental results show that the success rates of isolating faulty signals average over 91.5% for fault type 1, over 92% for fault type 2, and over 99% for fault type 3, with a fault prior of 30% sensor errors.
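
    The three fault types named above are standard in the fault-detection literature; the injection models below are common textbook forms and are assumptions here, since the paper's exact definitions live in its Section V:

        import numpy as np

        rng = np.random.default_rng(3)
        t = np.arange(0, 10, 0.02)
        clean = np.sin(t)                           # healthy on-body sensor signal

        short = clean.copy()                        # SHORT: brief spike/dropout
        short[250] += 5.0

        constant = clean.copy()                     # CONSTANT: sensor stuck at a value
        constant[300:] = constant[300]

        noisy = clean + rng.normal(0, 0.8, t.size)  # NOISY SENSOR: inflated noise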

  10. Assumption-free estimation of the genetic contribution to refractive error across childhood.

    PubMed

    Guggenheim, Jeremy A; St Pourcain, Beate; McMahon, George; Timpson, Nicholas J; Evans, David M; Williams, Cathy

    2015-01-01

    Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75-90%, families 15-70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Between the ages of 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated by genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: across ages 7-15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001), demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested that lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8-9 years old. Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population.

  11. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(V_n, S_n) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(V_n, S_n)], for arbitrary functions g(V_n, S_n) of the numbers of false positives V_n and true positives S_n. Of particular interest are error rates based on the proportion g(V_n, S_n) = V_n/(V_n + S_n) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[V_n/(V_n + S_n)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
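
    For reference, the classical Benjamini and Hochberg (1995) linear step-up procedure used as the baseline above can be sketched in a few lines (a standard implementation, not the authors' empirical Bayes code): reject the hypotheses with the k smallest p-values, where k is the largest i with p_(i) <= (i/m)*q.

        import numpy as np

        def benjamini_hochberg(pvals, q=0.05):
            p = np.asarray(pvals)
            m = p.size
            order = np.argsort(p)
            thresh = (np.arange(1, m + 1) / m) * q      # BH step-up boundary
            below = p[order] <= thresh
            k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
            reject = np.zeros(m, dtype=bool)
            reject[order[:k]] = True                    # reject k smallest p-values
            return reject

        print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.2, 0.7]))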

  12. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
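
    A quick illustration of the multiplicative error model discussed above: each observation is the true value times (1 + ε), so the residual spread grows with the measured quantity, which is exactly what an additive-error (constant-variance) adjustment mis-weights. All values below are invented:

        import numpy as np

        rng = np.random.default_rng(4)
        true_vals = np.linspace(10.0, 1000.0, 500)   # e.g., LiDAR ranges
        eps = rng.normal(0.0, 0.01, true_vals.size)  # 1% proportional error
        observed = true_vals * (1.0 + eps)

        residual = observed - true_vals
        # Residual spread scales with the true value, the signature that a
        # constant-variance (additive) adjustment would mis-weight the data:
        print(np.std(residual[:100]), np.std(residual[-100:]))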

  13. The ophthalmological course of Usher syndrome type III.

    PubMed

    Pakarinen, L; Tuppurainen, K; Laippala, P; Mäntyjärvi, M; Puhakka, H

    Usher syndrome is a recessive hereditary disease group with clinical and genetic heterogeneity, leading to impaired hearing and progressive visual loss by middle age. It is the most common cause of deaf-blindness. Three distinct phenotypes and five distinct genotypes are already known. In Finland the distribution of known Usher types is different than elsewhere. Usher syndrome type III (USH3) is common in Finland and is thought to include 40% of patients. Progressive hearing loss is characteristic of USH3. Elsewhere USH3 has been regarded as a rarity covering only a few percent of the whole Usher population. The aim of this paper is to describe, for the first time, the course of visual handicap and typical refractive errors in USH3 and compare it with the other USH types. From a total patient sample consisting of 229 Finnish USH patients, 200 patients' visual findings were analyzed in a multicenter retrospective follow-up study. The average progression rate during a 10-year follow-up period was similar in the different USH types. The essential progression occurred below the age of 40 and was continuous up to that age. Visual acuity dropped below 0.05 (severely impaired) at the age of 37, and the visual fields were of tubular shape without any peripheral islands at the average age of 30. Clinically significant hypermetropia with astigmatism seems to be a pathognomonic clinical sign of USH3.

  14. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    PubMed

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps: clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Snake River Plain Geothermal Play Fairway Analysis - Phase 1 Raster Files

    DOE Data Explorer

    John Shervais

    2015-10-09

    Snake River Plain Play Fairway Analysis - Phase 1 CRS Raster Files. This dataset contains raster files created in ArcGIS. These raster images depict Common Risk Segment (CRS) maps for HEAT, PERMEABILITY, and SEAL, as well as selected maps of evidence layers. These evidence layers consist of either Bayesian krige functions or kernel density functions, and include: (1) HEAT: heat flow (Bayesian krige map), heat flow standard error on the krige function (data confidence), volcanic vent distribution as a function of age and size, groundwater temperature (equal interval and natural breaks bins), and groundwater temperature standard error. (2) PERMEABILITY: fault and lineament maps, both as mapped and as kernel density functions, processed for both dilational tendency (TD) and slip tendency (ST), along with data confidence maps for each data type. Data types include mapped surface faults from USGS and Idaho Geological Survey databases, as well as unpublished mapping; lineations derived from maximum gradients in magnetic, deep gravity, and intermediate-depth gravity anomalies. (3) SEAL: seal maps based on the presence and thickness of lacustrine sediments and the base of the SRP aquifer. Raster size is 2 km. All files were generated in ArcGIS.

  16. Fuzzy Control of Robotic Arm

    NASA Astrophysics Data System (ADS)

    Lin, Kyaw Kyaw; Soe, Aung Kyaw; Thu, Theint Theint

    2008-10-01

    This research work investigates a Self-Tuning Proportional Derivative (PD) type Fuzzy Logic Controller (STPDFLC) for a two-link robot system. The proposed scheme adjusts the output Scaling Factor (SF) on-line by fuzzy rules according to the current state of the robot. The rule base for tuning the output scaling factor is defined on the error (e) and the change in error (de). The scheme is also based on the fact that the controller always tries to manipulate the process input. The rules are in the familiar if-then format. All membership functions for the controller inputs (e and de) and the controller output (UN) are defined on the common interval [-1,1], whereas the membership functions for the gain updating factor (α) are defined on [0,1]. There are various methods to calculate the crisp output of the system; the Center of Gravity (COG) method is used in this application because of the better results it gives. The performance of the proposed STPDFLC is compared with that of the corresponding conventional PD-type Fuzzy Logic Controller (PDFLC). The proposed scheme shows a remarkably improved performance over its conventional counterpart, especially under parameter variations (payload). The two-link system is simulated, and the simulation results are illustrated using MATLAB® programming.
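
    A minimal sketch of the PD-type fuzzy controller structure described above, with inputs normalized to [-1,1], triangular membership functions, an if-then rule table, and a discrete centre-of-gravity (weighted-average) defuzzification. The 3-label rule base and output singletons are toy assumptions, and the self-tuning of the output scaling factor by α is omitted:

        def tri(x, a, b, c):
            """Triangular membership function with peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzify(x):
            # N, Z, P labels on the common interval [-1, 1]
            return {"N": tri(x, -2.0, -1.0, 0.0),
                    "Z": tri(x, -1.0, 0.0, 1.0),
                    "P": tri(x, 0.0, 1.0, 2.0)}

        # Rule table: (label of e, label of de) -> output singleton in [-1, 1]
        RULES = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
                 ("Z", "N"): -0.5, ("Z", "Z"): 0.0, ("Z", "P"): 0.5,
                 ("P", "N"): 0.0, ("P", "Z"): 0.5, ("P", "P"): 1.0}

        def fuzzy_pd(e, de):
            mu_e, mu_de = fuzzify(e), fuzzify(de)
            num = den = 0.0
            for (le, lde), u in RULES.items():
                w = min(mu_e[le], mu_de[lde])    # rule firing strength (min AND)
                num += w * u
                den += w
            return num / den if den else 0.0     # crisp normalized output UN

        print(f"UN = {fuzzy_pd(0.6, -0.2):.3f}")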

  17. Consideration of species community composition in statistical ...

    EPA Pesticide Factsheets

    Diseases are increasing in marine ecosystems, and these increases have been attributed to a number of environmental factors including climate change, pollution, and overfishing. However, many studies pool disease prevalence into taxonomic groups, disregarding host species composition when comparing sites or assessing environmental impacts on patterns of disease presence. We used simulated data under a known environmental effect to assess the ability of standard statistical methods (binomial and linear regression, ANOVA) to detect a significant environmental effect on pooled disease prevalence with varying species abundance distributions and relative susceptibilities to disease. When one species was more susceptible to a disease and both species only partially overlapped in their distributions, models tended to produce a greater number of false positives (Type I error). Differences in disease risk between regions or along an environmental gradient tended to be underestimated, or even in the wrong direction, when highly susceptible taxa had reduced abundances in impacted sites, a situation likely to be common in nature. Including relative abundance as an additional variable in regressions improved model accuracy, but tended to be conservative, producing more false negatives (Type II error) when species abundance was strongly correlated with the environmental effect. Investigators should be cautious of underlying assumptions of species similarity in susceptibility.
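
    The core of the simulation argument can be reproduced in miniature: two species with different susceptibilities, species composition shifting along an environmental gradient that has no direct effect on disease, and a pooled logistic regression that nevertheless flags the gradient. All prevalences and effect sizes below are invented for the sketch.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      reps, n_sites, n_ind = 200, 40, 50
      false_pos = 0
      for _ in range(reps):
          env = rng.normal(size=n_sites)              # gradient: no direct disease effect
          p_spA = 1 / (1 + np.exp(-1.5 * env))        # susceptible species A rarer at low env
          env_col, disease = [], []
          for s in range(n_sites):
              is_A = rng.binomial(1, p_spA[s], n_ind)
              prev = np.where(is_A == 1, 0.30, 0.05)  # species differ in susceptibility
              disease.append(rng.binomial(1, prev))
              env_col.append(np.full(n_ind, env[s]))
          y = np.concatenate(disease)
          X = sm.add_constant(np.concatenate(env_col))
          false_pos += sm.Logit(y, X).fit(disp=0).pvalues[1] < 0.05
      # pooling across species attributes the compositional shift to the environment
      print("apparent 'environmental effect' detection rate:", false_pos / reps)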

  18. Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.

    PubMed

    Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng

    2015-01-01

    Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
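
    For concreteness, this is the percentile-bootstrap test of the indirect effect a*b in its simplest form, at a sample size in the contested 20-80 range; the data-generating values are arbitrary, and this sketch does not implement the alternative resampling or Bayesian methods the authors recommend.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 50                                  # a small sample, as in the 20-80 range
      x = rng.normal(size=n)
      m = 0.4 * x + rng.normal(size=n)        # mediator
      y = 0.4 * m + rng.normal(size=n)        # outcome

      def indirect(idx):
          # a = slope of m ~ x; b = partial slope of y ~ m controlling for x
          a = np.linalg.lstsq(np.c_[np.ones(idx.size), x[idx]],
                              m[idx], rcond=None)[0][1]
          b = np.linalg.lstsq(np.c_[np.ones(idx.size), m[idx], x[idx]],
                              y[idx], rcond=None)[0][1]
          return a * b

      boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(5000)])
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"ab = {indirect(np.arange(n)):.3f}, 95% percentile CI = ({lo:.3f}, {hi:.3f})")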

  19. Evaluating significance in linear mixed-effects models in R.

    PubMed

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
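
    A minimal Python analogue (the paper works in R/lme4) of the likelihood-ratio-test simulation: fit a random-intercept model with and without the fixed effect under a true null and count rejections. Group counts and sizes are arbitrary, and the Kenward-Roger and Satterthwaite corrections the paper recommends are not shown.

      import warnings
      import numpy as np
      from scipy import stats
      import statsmodels.api as sm

      warnings.filterwarnings("ignore")             # silence MixedLM convergence chatter
      rng = np.random.default_rng(3)
      n_groups, n_per, reps, hits = 12, 5, 200, 0   # deliberately small sample
      for _ in range(reps):
          g = np.repeat(np.arange(n_groups), n_per)
          x = rng.normal(size=g.size)
          y = rng.normal(size=n_groups)[g] + rng.normal(size=g.size)  # null: no x effect
          full = sm.MixedLM(y, sm.add_constant(x), groups=g).fit(reml=False)
          null = sm.MixedLM(y, np.ones((g.size, 1)), groups=g).fit(reml=False)
          hits += stats.chi2.sf(2 * (full.llf - null.llf), df=1) < 0.05
      # with samples this small the LRT tends to land above the nominal .05
      print("empirical Type 1 error of the likelihood ratio test:", hits / reps)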

  20. Refractive error among the elderly in rural Southern Harbin, China.

    PubMed

    Li, Zhijian; Sun, Dianjun; Cui, Hao; Zhang, Liqiong; Liu, Ping; Yang, Hongbin; Bai, Jie

    2009-01-01

    To estimate the prevalence and associated factors of refractive errors among the elderly in a rural area of Southern Harbin, China. A total of 5,057 subjects (age ≥ 50 years) were enumerated for a population-based study. All participants underwent complete ophthalmic evaluation. Refraction was performed by ophthalmic personnel trained in the study procedures. Myopia was defined as a spherical equivalent worse than -0.50 diopters (D) and hyperopia as a spherical equivalent worse than +0.50 D. Astigmatism was defined as a cylindrical error worse than 0.75 D. Associations of refractive errors with age, sex, and education were analyzed. Of the 5,057 responders (91.0%), 4,979 were eligible. The mean age was 60.5 (range 50-96) years. The prevalence of myopia was 9.5% (95% confidence interval [CI], 8.5-10.1) and of hyperopia was 8.9% (95% CI, 7.9-9.5). Astigmatism was evident in 7.6% of the subjects. Myopia, hyperopia, and astigmatism each increased with increasing age (p<0.001). Myopia and astigmatism were more common in males, whereas hyperopia was more common in females. We also found that the prevalence of refractive error was associated with education. Myopia was more common in those with higher degrees of education, whereas hyperopia and astigmatism were more common in those with no formal education. This report has provided details of the refractive status in a rural population of Harbin. The prevalence of refractive errors in this population is lower than those reported in other regions of the world.
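
    The cut-offs reported above translate directly into a small classifier; the spherical-equivalent convention SE = sphere + cylinder/2 is assumed (it is not stated in the abstract), and the function is illustrative only.

      def classify_refraction(sphere, cylinder):
          """Apply the study's cut-offs. SE = sphere + cylinder / 2 (a common
          convention, assumed here): myopia SE < -0.50 D, hyperopia SE > +0.50 D,
          astigmatism |cylinder| > 0.75 D."""
          se = sphere + cylinder / 2.0
          labels = []
          if se < -0.50:
              labels.append("myopia")
          elif se > 0.50:
              labels.append("hyperopia")
          if abs(cylinder) > 0.75:
              labels.append("astigmatism")
          return labels or ["emmetropia"]

      print(classify_refraction(-1.25, -0.50))   # ['myopia']
      print(classify_refraction(+0.25, -1.00))   # SE = -0.25 -> ['astigmatism']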

  1. Evaluating statistical approaches to leverage large clinical datasets for uncovering therapeutic and adverse medication effects.

    PubMed

    Choi, Leena; Carroll, Robert J; Beck, Cole; Mosley, Jonathan D; Roden, Dan M; Denny, Joshua C; Van Driest, Sara L

    2018-04-18

    Phenome-wide association studies (PheWAS) have been used to discover many genotype-phenotype relationships and have the potential to identify therapeutic and adverse drug outcomes using longitudinal data within electronic health records (EHRs). However, the statistical methods for PheWAS applied to longitudinal EHR medication data have not been established. In this study, we developed methods to address two challenges faced with reuse of EHR for this purpose: confounding by indication, and low exposure and event rates. We used Monte Carlo simulation to assess propensity score (PS) methods, focusing on two of the most commonly used methods, PS matching and PS adjustment, to address confounding by indication. We also compared two logistic regression approaches (the default of Wald vs. Firth's penalized maximum likelihood, PML) to address complete separation due to sparse data with low exposure and event rates. PS adjustment resulted in greater power than propensity score matching, while controlling Type I error at 0.05. The PML method provided reasonable p-values, even in cases with complete separation, with well controlled Type I error rates. Using PS adjustment and the PML method, we identify novel latent drug effects in pediatric patients exposed to two common antibiotic drugs, ampicillin and gentamicin. R packages PheWAS and EHR are available at https://github.com/PheWAS/PheWAS and at CRAN (https://www.r-project.org/), respectively. The R script for data processing and the main analysis is available at https://github.com/choileena/EHR. leena.choi@vanderbilt.edu. Supplementary data are available at Bioinformatics online.
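
    A compact sketch of the propensity-score-adjustment idea under confounding by indication: the PS is estimated from a confounder and entered as a covariate in the outcome model. The confounder strength and event rate are invented, and Firth's penalized likelihood is not sketched here (it is not part of statsmodels' standard Logit).

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)
      n = 5000
      sev = rng.normal(size=n)                                      # indication severity
      drug = rng.binomial(1, 1 / (1 + np.exp(-1.2 * sev)))          # sicker patients treated
      event = rng.binomial(1, 1 / (1 + np.exp(-(-3 + 0.9 * sev))))  # outcome: severity only

      ps = sm.Logit(drug, sm.add_constant(sev)).fit(disp=0).predict()  # propensity score
      naive = sm.Logit(event, sm.add_constant(drug)).fit(disp=0)
      adj = sm.Logit(event, sm.add_constant(np.c_[drug, ps])).fit(disp=0)
      print("naive drug log-odds:      ", round(naive.params[1], 2))  # biased away from 0
      print("PS-adjusted drug log-odds:", round(adj.params[1], 2))    # roughly recovers 0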

  2. Error-Free Text Typing Performance of an Inductive Intra-Oral Tongue Computer Interface for Severely Disabled Individuals.

    PubMed

    Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R

    2017-11-01

    For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven to be desirable to users, partly due to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able-bodied individuals used the ITCI for typing using a MATLAB interface and for Word typing for 4 to 5 experimental days, and the results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing rates can be improved further through long-term use of the ITCI.

  3. Memory and the Moses illusion: failures to detect contradictions with stored knowledge yield negative memorial consequences.

    PubMed

    Bottoms, Hayden C; Eslick, Andrea N; Marsh, Elizabeth J

    2010-08-01

    Although contradictions with stored knowledge are common in daily life, people often fail to notice them. For example, in the Moses illusion, participants fail to notice errors in questions such as "How many animals of each kind did Moses take on the Ark?" despite later showing knowledge that the Biblical reference is to Noah, not Moses. We examined whether error prevalence affected participants' ability to detect distortions in questions, and whether this in turn had memorial consequences. Many of the errors were overlooked, but participants were better able to catch them when they were more common. More generally, the failure to detect errors had negative memorial consequences, increasing the likelihood that the errors were used to answer later general knowledge questions. Methodological implications of this finding are discussed, as it suggests that typical analyses likely underestimate the size of the Moses illusion. Overall, answering distorted questions can yield errors in the knowledge base; most importantly, prior knowledge does not protect against these negative memorial consequences.

  4. Comparison of the efficacy and technical accuracy of different rectangular collimators for intraoral radiography.

    PubMed

    Zhang, Wenjian; Abramovitch, Kenneth; Thames, Walter; Leon, Inga-Lill K; Colosi, Dan C; Goren, Arthur D

    2009-07-01

    The objective of this study was to compare the operating efficiency and technical accuracy of 3 different rectangular collimators. A full-mouth intraoral radiographic series excluding central incisor views was taken on training manikins by 2 groups of undergraduate dental and dental hygiene students. Three types of rectangular collimator were used: Type I ("free-hand"), Type II (mechanical interlocking), and Type III (magnetic collimator). Eighteen students exposed one side of the manikin with a Type I collimator and the other side with a Type II. Another 15 students exposed the manikin with Type I and Type III, respectively. Type I is currently used for teaching and patient care at our institution and was considered the control to which both Types II and III were compared. The time necessary to perform the procedure, subjective user friendliness, and the number of technique errors (placement, projection, and cone cut errors) were assessed. The Student t test or signed rank test was used to determine statistical significance.

  5. Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.

    PubMed

    Pathak, Biswajit; Boruah, Bosanta R

    2017-12-01

    Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt. 16, 055403 (2014); doi:10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.

  6. What errors do peer reviewers detect, and does training improve their ability to detect them?

    PubMed

    Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard

    2008-10-01

    To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, with over 60% of the reviewers who rejected the papers identifying this error. Reviewers who did not reject the papers found fewer errors, and the proportion finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.

  7. Software fault-tolerance by design diversity DEDIX: A tool for experiments

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Lyu, R. T.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    The use of multiple versions of a computer program, independently designed from a common specification, to reduce the effects of an error is discussed. If these versions are designed by independent programming teams, it is expected that a fault in one version will not have the same behavior as any fault in the other versions. Since the errors in the output of the versions are different and uncorrelated, it is possible to run the versions concurrently, cross-check their results at prespecified points, and mask errors. A DEsign DIversity eXperiments (DEDIX) testbed was implemented to study the influence of common mode errors which can result in a failure of the entire system. The layered design of DEDIX and its decision algorithm are described.

  8. Unreliable numbers: error and harm induced by bad design can be reduced by better design

    PubMed Central

    Thimbleby, Harold; Oladimeji, Patrick; Cairns, Paul

    2015-01-01

    Number entry is a ubiquitous activity and is often performed in safety- and mission-critical procedures, such as healthcare, science, finance, aviation and in many other areas. We show that Monte Carlo methods can quickly and easily compare the reliability of different number entry systems. A surprising finding is that many common, widely used systems are defective, and induce unnecessary human error. We show that Monte Carlo methods enable designers to explore the implications of normal and unexpected operator behaviour, and to design systems to be more resilient to use error. We demonstrate novel designs with improved resilience, implying that the common problems identified and the errors they induce are avoidable. PMID:26354830
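
    A toy Monte Carlo in the spirit of the paper: inject random keystroke slips into a target entry and ask how often the result is a wrong-but-syntactically-valid number (silently accepted by a naive field) versus a malformed string that a validating design could block and force to be re-keyed. The slip model and rates are invented for the sketch.

      import random
      random.seed(1)

      TARGET = "12.5"

      def slip(keys, p=0.08):
          # inject simple operator slips: occasionally drop or double a key
          out = []
          for k in keys:
              r = random.random()
              if r < p / 2:
                  continue                # key press missed
              out.append(k)
              if r < p:
                  out.append(k)           # key bounce / repeat
          return "".join(out)

      trials, wrong_valid, blockable = 100_000, 0, 0
      for _ in range(trials):
          typed = slip(TARGET)
          if typed == TARGET:
              continue
          try:
              if float(typed) != 12.5:
                  wrong_valid += 1        # e.g. "125": accepted silently, wrong value
          except ValueError:
              blockable += 1              # e.g. "12..5": a validating design rejects it
      print(f"wrong but valid: {wrong_valid / trials:.4f}, "
            f"blockable by validation: {blockable / trials:.4f}")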

  9. Symmetric and Asymmetric Patterns of Attraction Errors in Producing Subject-Predicate Agreement in Hebrew: An Issue of Morphological Structure

    ERIC Educational Resources Information Center

    Deutsch, Avital; Dank, Maya

    2011-01-01

    A common characteristic of subject-predicate agreement errors (usually termed attraction errors) in complex noun phrases is an asymmetrical pattern of error distribution, depending on the inflectional state of the nouns comprising the complex noun phrase. That is, attraction is most likely to occur when the head noun is the morphologically…

  10. Mimicking Aphasic Semantic Errors in Normal Speech Production: Evidence from a Novel Experimental Paradigm

    ERIC Educational Resources Information Center

    Hodgson, Catherine; Lambon Ralph, Matthew A.

    2008-01-01

    Semantic errors are commonly found in semantic dementia (SD) and some forms of stroke aphasia and provide insights into semantic processing and speech production. Low error rates are found in standard picture naming tasks in normal controls. In order to increase error rates and thus provide an experimental model of aphasic performance, this study…

  11. Teaching Common Errors in Applying a Procedure. IDD&E Working Paper No. 18.

    ERIC Educational Resources Information Center

    Garduno, Alberto O.; And Others

    The purpose of this study was to replicate the Bentti, Golden, and Reigeluth study (1983), which explored the use of nonexamples to teach common errors as an effective strategy in teaching a procedure. A total of 24 undergraduate students enrolled in the Syracuse University Symphonic Band were randomly assigned to an experimental group and a…

  12. The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.

    ERIC Educational Resources Information Center

    Kaskowitz, Gary S.; De Ayala, R. J.

    2001-01-01

    Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…

  13. Children's Overtensing Errors: Phonological and Lexical Effects on Syntax

    ERIC Educational Resources Information Center

    Stemberger, Joseph Paul

    2007-01-01

    Overtensing (the use of an inflected form in place of a nonfinite form, e.g. *"didn't broke" for target "didn't break") is common in early syntax. In a CHILDES-based study of 36 children acquiring English, I examine the effects of phonological and lexical factors. For irregulars, errors are more common with verbs of low frequency and when…

  14. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  15. Graduate Students' Administration and Scoring Errors on the Woodcock-Johnson III Tests of Cognitive Abilities

    ERIC Educational Resources Information Center

    Ramos, Erica; Alfonso, Vincent C.; Schermerhorn, Susan M.

    2009-01-01

    The interpretation of cognitive test scores often leads to decisions concerning the diagnosis, educational placement, and types of interventions used for children. Therefore, it is important that practitioners administer and score cognitive tests without error. This study assesses the frequency and types of examiner errors that occur during the…

  16. The (mis)reporting of statistical results in psychology journals.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2011-09-01

    In order to study the prevalence, nature (direction), and causes of reporting errors in psychology, we checked the consistency of reported test statistics, degrees of freedom, and p values in a random sample of high- and low-impact psychology journals. In a second study, we established the generality of reporting errors in a random sample of recent psychological articles. Our results, on the basis of 281 articles, indicate that around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers' expectations. We classified the most common errors and contacted authors to shed light on the origins of the errors.
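
    The consistency check at the heart of such studies is mechanical: recompute the p-value implied by a reported test statistic and its degrees of freedom and compare it with the reported p. A minimal sketch for t-tests (the function name and tolerance are ours, not the authors').

      from scipy import stats

      def check_t_report(t, df, reported_p, two_tailed=True, tol=0.005):
          # recompute the p-value implied by t(df) and flag gross mismatches
          p = stats.t.sf(abs(t), df) * (2 if two_tailed else 1)
          return {"recomputed_p": round(p, 4), "consistent": abs(p - reported_p) <= tol}

      print(check_t_report(t=2.10, df=28, reported_p=0.045))  # consistent
      print(check_t_report(t=1.70, df=28, reported_p=0.04))   # inconsistent: p ~= 0.10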

  17. Common errors of drug administration in infants: causes and avoidance.

    PubMed

    Anderson, B J; Ellis, J F

    1999-01-01

    Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.

  18. Errors in Aviation Decision Making: Bad Decisions or Bad Luck?

    NASA Technical Reports Server (NTRS)

    Orasanu, Judith; Martin, Lynne; Davison, Jeannie; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Despite efforts to design systems and procedures to support 'correct' and safe operations in aviation, errors in human judgment still occur and contribute to accidents. In this paper we examine how an NDM (naturalistic decision making) approach might help us to understand the role of decision processes in negative outcomes. Our strategy was to examine a collection of identified decision errors through the lens of an aviation decision process model and to search for common patterns. The second, and more difficult, task was to determine what might account for those patterns. The corpus we analyzed consisted of tactical decision errors identified by the NTSB (National Transportation Safety Board) from a set of accidents in which crew behavior contributed to the accident. A common pattern emerged: about three quarters of the errors represented plan-continuation errors, that is, a decision to continue with the original plan despite cues that suggested changing the course of action. Features in the context that might contribute to these errors were identified: (a) ambiguous dynamic conditions and (b) organizational and socially-induced goal conflicts. We hypothesize that 'errors' are mediated by underestimation of risk and failure to analyze the potential consequences of continuing with the initial plan. Stressors may further contribute to these effects. Suggestions for improving performance in these error-inducing contexts are discussed.

  19. Analyzing students’ errors on fractions in the number line

    NASA Astrophysics Data System (ADS)

    Widodo, S.; Ikhwanudin, T.

    2018-05-01

    The objective of this study is to identify the types of errors students make when dealing with fractions on the number line. This study used a qualitative, descriptive method and involved 31 sixth-grade students at one of the primary schools in Purwakarta, Indonesia. The results of this study are as follows: there are four types of student errors: unit confusion, tick-mark interpretation error, partitioning and un-partitioning error, and estimation error. We recommend that teachers strengthen students' understanding of units when studying fractions, make students understand tick-mark interpretation, remind students of the importance of partitioning and un-partitioning strategies, and teach effective estimation strategies.

  20. Errors analysis of problem solving using the Newman stage after applying cooperative learning of TTW type

    NASA Astrophysics Data System (ADS)

    Rr Chusnul, C.; Mardiyana, S., Dewi Retno

    2017-12-01

    Problem solving is the basis of mathematics learning. Problem solving teaches us to clarify an issue coherently in order to avoid misunderstanding information. Sometimes there may be mistakes in problem solving due to misunderstanding the issue, choosing a wrong concept, or misapplying a concept. The problem-solving test was carried out after students received instruction using cooperative learning of the TTW type. The purpose of this study was to elucidate students' problem-solving errors after learning with cooperative learning of the TTW type. Newman stages were used to identify problem-solving errors in this study. This research used a descriptive method to find out students' problem-solving errors. The subjects in this study were 10th-grade students of a Vocational Senior High School (SMK). Tests and interviews were conducted for data collection. The results of this study describe students' problem-solving errors at the Newman stages after learning with cooperative learning of the TTW type.

  1. Using raindrop size distributions from different types of disdrometer to establish weather radar algorithms

    NASA Astrophysics Data System (ADS)

    Baldini, Luca; Adirosi, Elisa; Roberto, Nicoletta; Vulpiani, Gianfranco; Russo, Fabio; Napolitano, Francesco

    2015-04-01

    Radar precipitation retrieval uses several relationships that parameterize precipitation properties (such as rainfall rate, liquid water content, and, for radars at attenuated frequencies such as C- and X-band, attenuation) as a function of combinations of radar measurements. The uncertainty in such relations strongly affects the uncertainty of precipitation and attenuation estimates. A commonly used method to derive such relationships is to apply regression methods to precipitation measurements and radar observables simulated from datasets of drop size distributions (DSD) using microphysical and electromagnetic assumptions. DSD datasets are obtained either from theoretical considerations (i.e., based on the assumption that the radar always samples raindrops whose sizes follow a gamma distribution) or from experimental measurements collected throughout the years by disdrometers. In principle, using long-term disdrometer measurements provides parameterizations more representative of a specific climatology. However, instrumental errors, specific to a disdrometer, can affect the results. In this study, different weather radar algorithms resulting from DSDs collected by diverse types of disdrometers, namely a 2D video disdrometer, the first and second generations of the OTT Parsivel laser disdrometer, and the Thies Clima laser disdrometer, in the area of Rome (Italy) are presented and discussed to establish to what extent dual-polarization radar algorithms derived from experimental DSD datasets are influenced by the different error structures of the different types of disdrometers used to collect the data.

  2. Risk of Performance and Behavioral Health Decrements Due to Inadequate Cooperation, Coordination, Communication, and Psychosocial Adaptation within a Team

    NASA Technical Reports Server (NTRS)

    Landon, Lauren Blackwell; Vessey, William B.; Barrett, Jamie D.

    2015-01-01

    A team is defined as: "two or more individuals who interact socially and adaptively, have shared or common goals, and hold meaningful task interdependences; it is hierarchically structured and has a limited life span; in it expertise and roles are distributed; and it is embedded within an organization/environmental context that influences and is influenced by ongoing processes and performance outcomes" (Salas, Stagl, Burke, & Goodwin, 2007, p. 189). From the NASA perspective, a team is commonly understood to be a collection of individuals that is assigned to support and achieve a particular mission. Thus, depending on context, this definition can encompass both the spaceflight crew and the individuals and teams in the larger multi-team system who are assigned to support that crew during a mission. The Team Risk outcomes of interest are predominantly performance related, with a secondary emphasis on long-term health; this is somewhat unique in the NASA HRP in that most Risk areas are medically related and primarily focused on long-term health consequences. In many operational environments (e.g., aviation), performance is assessed as the avoidance of errors. However, the research on performance errors is ambiguous. It implies that actions may be dichotomized into "correct" or "incorrect" responses, where incorrect responses or errors are always undesirable. Researchers have argued that this dichotomy is a harmful oversimplification, and it would be more productive to focus on the variability of human performance and how organizations can manage that variability (Hollnagel, Woods, & Leveson, 2006) (Category III). Two problems occur when focusing on performance errors: 1) the errors are infrequent and, therefore, difficult to observe and record; and 2) the errors do not directly correspond to failure. Research reveals that humans are fairly adept at correcting or compensating for performance errors before such errors result in recognizable or recordable failures. Astronauts are notably adept high performers. Most failures are recorded only when multiple, small errors occur and humans are unable to recognize and correct or compensate for these errors in time to prevent a failure (Dismukes, Berman, & Loukopoulos, 2007) (Category III). More commonly, observers record variability in levels of performance. Some teams commit no observable errors but fail to achieve performance objectives or perform only adequately, while other teams commit some errors but perform spectacularly. Successful performance, therefore, cannot be viewed as simply the absence of errors or the avoidance of failure (Johnson Space Center (JSC) Joint Leadership Team, 2008). While failure is commonly attributed to making a major error, focusing solely on the elimination of error(s) does not significantly reduce the risk of failure. Failure may also occur when performance is simply insufficient or an effort is incapable of adjusting sufficiently to a contextual change (e.g., changing levels of autonomy).

  3. Typing Style and the Use of Different Sources of Information during Typing: An Investigation Using Self-Reports

    PubMed Central

    Rieger, Martina; Bart, Victoria K. E.

    2016-01-01

    We investigated to what extent different sources of information are used in typing on a computer keyboard. Using self-reports 10 finger typists and idiosyncratic typists estimated how much attention they pay to different sources of information during copy typing and free typing and how much they use them for error detection. 10 finger typists reported less attention to the keyboard and the fingers and more attention to the template and the screen than idiosyncratic typists. The groups did not differ in attention to touch/kinaesthesis in copy typing and free typing, but 10 finger typists reported more use of touch/kinaesthesis in error detection. This indicates that processing of tactile/kinaesthetic information may occur largely outside conscious control, as long as no errors occur. 10 finger typists reported more use of internal prediction of movement consequences for error detection than idiosyncratic typists, reflecting more precise internal models. Further in copy typing compared to free typing attention to the template is required, thus leaving less attentional capacity for other sources of information. Correlations showed that higher skilled typists, regardless of typing style, rely more on sources of information which are usually associated with 10 finger typing. One limitation of the study is that only self-reports were used. We conclude that typing task, typing proficiency, and typing style influence how attention is distributed during typing. PMID:28018256

  5. Detection of gene-environment interactions in the presence of linkage disequilibrium and noise by using genetic risk scores with internal weights from elastic net regression.

    PubMed

    Hüls, Anke; Ickstadt, Katja; Schikowski, Tamara; Krämer, Ursula

    2017-06-12

    For the analysis of gene-environment (GxE) interactions, single nucleotide polymorphisms (SNPs) are commonly used to characterize genetic susceptibility, an approach that mostly lacks power and has poor reproducibility. One promising approach to overcome this problem might be the use of weighted genetic risk scores (GRS), which are defined as weighted sums of risk alleles of gene variants. The gold standard is to use external weights from published meta-analyses. In this study, we used internal weights from the marginal genetic effects of the SNPs estimated by a multivariate elastic net regression and thereby provided a method that can be used if there are no external weights available. We conducted a simulation study for the detection of GxE interactions and compared power and type I error of single-SNP analyses with Bonferroni correction and the corresponding analyses with the unweighted and our weighted GRS approach in scenarios with six risk SNPs and an increasing number of highly correlated (up to 210) and noise SNPs (up to 840). Applying weighted GRS increased the power enormously in comparison to the common single-SNP approach (e.g. 94.2% vs. 35.4%, respectively, to detect a weak interaction with an OR ≈ 1.04 for six uncorrelated risk SNPs and n = 700 with a well-controlled type I error). Furthermore, weighted GRS outperformed the unweighted GRS, in particular in the presence of SNPs without any effect on the phenotype (e.g. 90.1% vs. 43.9%, respectively, when 20 noise SNPs were added to the six risk SNPs). This advantage of the weighted GRS was confirmed in a real data application on lung inflammation in the SALIA cohort (n = 402). However, in scenarios with a high number of noise SNPs (>200 vs. 6 risk SNPs), larger sample sizes are needed to avoid an increased type I error, whereas a high number of correlated SNPs can be handled even in small samples (e.g. n = 400). In conclusion, weighted GRS with weights from the marginal genetic effects of the SNPs estimated by a multivariate elastic net regression were shown to be a powerful tool to detect gene-environment interactions in scenarios of high linkage disequilibrium and noise.
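
    A condensed sketch of the two-stage procedure: elastic-net coefficients supply the internal weights, the weighted GRS is formed, and the GRS x environment interaction is tested in a logistic model. SNP counts, allele frequency, and effect sizes are invented, and, as in the internal-weights setting studied by the authors, the same data are used for weighting and testing.

      import numpy as np
      import statsmodels.api as sm
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(5)
      n, n_risk, n_noise = 700, 6, 40
      snps = rng.binomial(2, 0.3, size=(n, n_risk + n_noise))   # allele counts 0/1/2
      env = rng.normal(size=n)
      burden = snps[:, :n_risk].sum(axis=1)
      logit = -2 + 0.15 * burden + 0.04 * burden * env          # weak GxE on risk SNPs
      y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      # step 1: internal weights from the marginal effects of an elastic net fit
      enet = LogisticRegression(penalty="elasticnet", solver="saga",
                                l1_ratio=0.5, C=1.0, max_iter=5000).fit(snps, y)
      grs = snps @ enet.coef_.ravel()                           # weighted genetic risk score

      # step 2: test the GRS x environment interaction in a single logistic model
      X = sm.add_constant(np.column_stack([grs, env, grs * env]))
      print("GxE interaction p-value:", sm.Logit(y, X).fit(disp=0).pvalues[3])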

  6. Consequences of common data analysis inaccuracies in CNS trauma injury basic research.

    PubMed

    Burke, Darlene A; Whittemore, Scott R; Magnuson, David S K

    2013-05-15

    The development of successful treatments for humans after traumatic brain or spinal cord injuries (TBI and SCI, respectively) requires animal research. This effort can be hampered when promising experimental results cannot be replicated because of incorrect data analysis procedures. To identify and hopefully avoid these errors in future studies, the articles in seven journals with the highest number of basic science central nervous system TBI and SCI animal research studies published in 2010 (N=125 articles) were reviewed for their data analysis procedures. After identifying the most common statistical errors, the implications of those findings were demonstrated by reanalyzing previously published data from our laboratories using the identified inappropriate statistical procedures, then comparing the two sets of results. Overall, 70% of the articles contained at least one type of inappropriate statistical procedure. The highest percentage involved incorrect post hoc t-tests (56.4%), followed by inappropriate parametric statistics (analysis of variance and t-test; 37.6%). Repeated Measures analysis was inappropriately missing in 52.0% of all articles and, among those with behavioral assessments, 58% were analyzed incorrectly. Reanalysis of our published data using the most common inappropriate statistical procedures resulted in a 14.1% average increase in significant effects compared to the original results. Specifically, an increase of 15.5% occurred with Independent t-tests and 11.1% after incorrect post hoc t-tests. Utilizing proper statistical procedures can allow more-definitive conclusions, facilitate replicability of research results, and enable more accurate translation of those results to the clinic.
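
    The most common error found, uncorrected post hoc t-tests, is easy to demonstrate by simulation: under a true null with five groups, all-pairs t-tests inflate the familywise false-positive rate well above .05, whereas Tukey's HSD holds it near the nominal level. Group counts and sizes below are arbitrary.

      import numpy as np
      from scipy import stats
      from statsmodels.stats.multicomp import pairwise_tukeyhsd

      rng = np.random.default_rng(2)
      reps, k, n = 500, 5, 10
      naive_fp = tukey_fp = 0
      for _ in range(reps):
          groups = [rng.normal(size=n) for _ in range(k)]   # all true means are equal
          naive_fp += any(stats.ttest_ind(groups[i], groups[j]).pvalue < 0.05
                          for i in range(k) for j in range(i + 1, k))
          tukey = pairwise_tukeyhsd(np.concatenate(groups), np.repeat(np.arange(k), n))
          tukey_fp += bool(tukey.reject.any())
      print("familywise false positives, uncorrected t-tests:", naive_fp / reps)
      print("familywise false positives, Tukey HSD:          ", tukey_fp / reps)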

  7. Error monitoring issues for common channel signaling

    NASA Astrophysics Data System (ADS)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
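
    The SS7 link error monitor is essentially a leaky-bucket counter, and its threshold behavior, an error-rate band in which changeovers become frequent, can be sketched in a few lines. The sketch below uses the commonly cited SUERM parameters (increment per errored signal unit, leak of 1 per 256 signal units, threshold 64) but is a simplification, not the Q.703 procedure.

      import random
      random.seed(4)

      def su_erm(error_rate, T=64, D=256, max_su=2_000_000):
          # leaky bucket: +1 per errored signal unit, -1 every D signal units,
          # changeover (link taken out of service) when the counter reaches T
          c = 0
          for n in range(1, max_su + 1):
              if random.random() < error_rate:
                  c += 1
                  if c >= T:
                      return n            # signal units until changeover
              if n % D == 0:
                  c = max(c - 1, 0)
          return None                     # monitor never tripped in the window

      # the leak sustains ~1/256 = 0.0039 errors per SU; rates near it oscillate
      for rate in (0.002, 0.004, 0.008):
          print(rate, su_erm(rate))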

  8. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  9. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
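
    As a concrete miniature of error detection plus retransmission: a CRC-16 check on each received frame, with the sender repeating the frame until the check passes (stop-and-wait ARQ). The bit-error model is a simple independent-flip channel invented for the sketch; note that a corrupted frame can still pass the CRC with small probability, which is the "virtually error-free" caveat.

      import random
      random.seed(6)

      def crc16(data: bytes, poly=0x1021, crc=0xFFFF):
          # bitwise CRC-16-CCITT: the error-detecting code guarding each frame
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
          return crc

      def stop_and_wait(frame: bytes, ber=2e-3):
          # retransmit until the receiver's CRC check passes; a corrupted frame
          # that happens to satisfy the CRC would slip through undetected (rare)
          attempts = 0
          while True:
              attempts += 1
              wire = bytearray(frame + crc16(frame).to_bytes(2, "big"))
              for i in range(len(wire) * 8):
                  if random.random() < ber:       # independent bit flips
                      wire[i // 8] ^= 1 << (i % 8)
              if crc16(bytes(wire[:-2])) == int.from_bytes(wire[-2:], "big"):
                  return attempts

      print("attempts needed:", stop_and_wait(b"hello, ARQ " * 10))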

  10. Interferometer for Measuring Displacement to Within 20 pm

    NASA Technical Reports Server (NTRS)

    Zhao, Feng

    2003-01-01

    An optical heterodyne interferometer that can be used to measure linear displacements with an error ≤ 20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces (relative to amplitude splits used in other interferometers) self-interference and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.

  11. Errors detected in pediatric oral liquid medication doses prepared in an automated workflow management system.

    PubMed

    Bledsoe, Sarah; Van Buskirk, Alex; Falconer, R James; Hollon, Andrew; Hoebing, Wendy; Jokic, Sladan

    2018-02-01

    To evaluate the effectiveness of barcode-assisted medication preparation (BCMP) technology in detecting oral liquid dose preparation errors. From June 1, 2013, through May 31, 2014, a total of 178,344 oral doses were processed at Children's Mercy, a 301-bed pediatric hospital, through an automated workflow management system. Doses containing errors detected by the system's barcode scanning system or classified as rejected by the pharmacist were further reviewed. Errors intercepted by the barcode-scanning system were classified as (1) expired product, (2) incorrect drug, (3) incorrect concentration, and (4) technological error. Pharmacist-rejected doses were categorized into 6 categories based on the root cause of the preparation error: (1) expired product, (2) incorrect concentration, (3) incorrect drug, (4) incorrect volume, (5) preparation error, and (6) other. Of the 178,344 doses examined, 3,812 (2.1%) errors were detected by either the barcode-assisted scanning system (1.8%, n = 3,291) or a pharmacist (0.3%, n = 521). The 3,291 errors prevented by the barcode-assisted system were classified most commonly as technological error and incorrect drug, followed by incorrect concentration and expired product. Errors detected by pharmacists were also analyzed. These 521 errors were most often classified as incorrect volume, preparation error, expired product, other, incorrect drug, and incorrect concentration. BCMP technology detected errors in 1.8% of pediatric oral liquid medication doses prepared in an automated workflow management system, with errors being most commonly attributed to technological problems or incorrect drugs. Pharmacists rejected an additional 0.3% of studied doses. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  12. Normal accidents: human error and medical equipment design.

    PubMed

    Dain, Steven

    2002-01-01

    High-risk systems, which are typical of our technologically complex era, include not just nuclear power plants but also hospitals, anesthesia systems, and the practice of medicine and perfusion. In high-risk systems, no matter how effective safety devices are, some types of accidents are inevitable because the system's complexity leads to multiple and unexpected interactions. It is important for healthcare providers to apply a risk assessment and management process to decisions involving new equipment and procedures or staffing matters in order to minimize the residual risks of latent errors, which are amenable to correction because of the large window of opportunity for their detection. This article provides an introduction to basic risk management and error theory principles and examines ways in which they can be applied to reduce and mitigate the inevitable human errors that accompany high-risk systems. The article also discusses "human factor engineering" (HFE), the process which is used to design equipment/human interfaces in order to mitigate design errors. The HFE process involves interaction between designers and end-users to produce a series of continuous refinements that are incorporated into the final product. The article also examines common design problems encountered in the operating room that may predispose operators to commit errors resulting in harm to the patient. While recognizing that errors and accidents are unavoidable, organizations that function within a high-risk system must adopt a "safety culture" that anticipates problems and acts aggressively through an anonymous, "blameless" reporting mechanism to resolve them. We must continuously examine and improve the design of equipment and procedures, personnel, supplies and materials, and the environment in which we work to reduce error and minimize its effects. Healthcare providers must take a leading role in the day-to-day management of the "Perioperative System" and be a role model in promoting a culture of safety in their organizations.

  13. Minimizing driver errors: examining factors leading to failed target tracking and detection.

    DOT National Transportation Integrated Search

    2013-06-01

    Driving a motor vehicle is a common practice for many individuals. Although driving becomes : repetitive and a very habitual task, errors can occur that lead to accidents. One factor that can be a : cause for such errors is a lapse in attention or a ...

  14. Why Does a Method That Fails Continue To Be Used: The Answer

    PubMed Central

    Templeton, Alan R.

    2009-01-01

    It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340

  15. Multiframe video coding for improved performance over wireless channels.

    PubMed

    Budagavi, M; Gibson, J D

    2001-01-01

    We propose and evaluate a multi-frame extension to block motion compensation (BMC) coding of videoconferencing-type video signals for wireless channels. The multi-frame BMC (MF-BMC) coder makes use of the redundancy that exists across multiple frames in typical videoconferencing sequences to achieve additional compression over that obtained by using the single frame BMC (SF-BMC) approach, such as in the base-level H.263 codec. The MF-BMC approach also has an inherent ability to overcome some transmission errors and is thus more robust when compared to the SF-BMC approach. We model the error propagation process in MF-BMC coding as a multiple Markov chain and use Markov chain analysis to infer that the use of multiple frames in motion compensation increases robustness. The Markov chain analysis is also used to devise a simple scheme which randomizes the selection of the frame (amongst the multiple previous frames) used in BMC to achieve additional robustness. The MF-BMC coders proposed are a multi-frame extension of the base level H.263 coder and are found to be more robust than the base level H.263 coder when subjected to simulated errors commonly encountered on wireless channels.

  16. Analysis of Errors Committed by Physics Students in Secondary Schools in Ilorin Metropolis, Nigeria

    ERIC Educational Resources Information Center

    Omosewo, Esther Ore; Akanbi, Abdulrasaq Oladimeji

    2013-01-01

    The study attempts to find out the types of errors committed and the influence of gender on the types of errors committed by senior secondary school physics students in Ilorin metropolis. Six (6) schools were purposively chosen for the study. One hundred and fifty-five students' scripts were randomly sampled for the study. Joint Mock physics essay questions…

  17. Error Types and Error Positions in Neglect Dyslexia: Comparative Analyses in Neglect Patients and Healthy Controls

    ERIC Educational Resources Information Center

    Weinzierl, Christiane; Kerkhoff, Georg; van Eimeren, Lucia; Keller, Ingo; Stenneken, Prisca

    2012-01-01

    Unilateral spatial neglect frequently involves a lateralised reading disorder, neglect dyslexia (ND). Reading of single words in ND is characterised by left-sided omissions and substitutions of letters. However, it is unclear whether the distribution of error types and positions within a word shows a unique pattern of ND when directly compared to…

  18. Evaluating software development characteristics: A comparison of software errors in different environments

    NASA Technical Reports Server (NTRS)

    Weiss, D. M.

    1981-01-01

    Error data obtained from two different software development environments are compared. To obtain data that were complete, accurate, and meaningful, a goal-directed data collection methodology was used. Changes made to software were monitored concurrently with its development. Similarities common to both environments include the following: (1) the principal error was in the design and implementation of single routines; (2) few errors were the result of changes, few required more than one attempt to correct, and few resulted in other errors; (3) relatively few errors took more than a day to correct.

  19. Common MRI acquisition non-idealities significantly impact the output of the boundary shift integral method of measuring brain atrophy on serial MRI.

    PubMed

    Preboske, Gregory M; Gunter, Jeff L; Ward, Chadwick P; Jack, Clifford R

    2006-05-01

    Measuring rates of brain atrophy from serial magnetic resonance imaging (MRI) studies is an attractive way to assess disease progression in neurodegenerative disorders, particularly Alzheimer's disease (AD). A widely recognized approach is the boundary shift integral (BSI). The objective of this study was to evaluate how several common scan non-idealities affect the output of the BSI algorithm. We created three types of image non-idealities between the image volumes in a serial pair used to measure between-scan change: inconsistent image contrast between serial scans, head motion, and poor signal-to-noise (SNR). In theory the BSI volume difference measured between each pair of images should be zero and any deviation from zero should represent corruption of the BSI measurement by some non-ideality intentionally introduced into the second scan in the pair. Two different BSI measures were evaluated, whole brain and ventricle. As the severity of motion, noise, and non-congruent image contrast increased in the second scan, the calculated BSI values deviated progressively more from the expected value of zero. This study illustrates the magnitude of the error in measures of change in brain and ventricle volume across serial MRI scans that can result from commonly encountered deviations from ideal image quality. The magnitudes of some of the measurement errors seen in this study exceed the disease effect in AD shown in various publications, which range from 1% to 2.78% per year for whole brain atrophy and 5.4% to 13.8% per year for ventricle expansion (Table 1). For example, measurement error may exceed 100% if image contrast properties dramatically differ between the two scans in a measurement pair. Methods to maximize consistency of image quality over time are an essential component of any quantitative serial MRI study.
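
    In its textbook form the BSI integrates clipped intensity differences over a boundary region and normalizes by the intensity window, which makes its sensitivity to contrast non-idealities easy to see: rescaling one scan's intensities changes the integral even when no extra atrophy occurred. The window values, mask, and toy 1-D "scans" below are invented for the sketch.

      import numpy as np

      def bsi(base, follow, mask, i1=0.25, i2=0.75, voxel_vol=1.0):
          # integrate clipped intensity differences over the boundary region and
          # normalise by the intensity window; scans are assumed registered and
          # intensity-normalised to [0, 1]
          diff = np.clip(base, i1, i2) - np.clip(follow, i1, i2)
          return voxel_vol * diff[mask].sum() / (i2 - i1)

      # toy 1-D "scans": a brain edge that recedes by one voxel (true atrophy = 1)
      base = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0])
      follow = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
      mask = np.array([False, False, True, True, True, False])
      print(bsi(base, follow, mask))         # ~1.0 voxel of atrophy
      print(bsi(base, follow * 0.6, mask))   # contrast change alone inflates the BSI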

  20. Characteristics of the Traumatic Forensic Cases Admitted To Emergency Department and Errors in the Forensic Report Writing.

    PubMed

    Aktas, Nurettin; Gulacti, Umut; Lok, Ugur; Aydin, İrfan; Borta, Tayfun; Celik, Murat

    2018-01-01

    To identify errors in forensic reports and to describe the characteristics of traumatic medico-legal cases presenting to the emergency department (ED) at a tertiary care hospital. This retrospective cross-sectional study includes cases resulting in a forensic report among all traumatic patients presenting to the ED of Adiyaman University Training and Research Hospital, Adiyaman, Turkey during a 1-year period. We recorded the demographic characteristics of all the cases, time of presentation to the ED, traumatic characteristics of medico-legal cases, forms of suicide attempt, suspected poisonous substance exposure, the result of follow-up and the type of forensic report. A total of 4300 traumatic medico-legal cases were included in the study and 72% of these cases were male. Traumatic medico-legal cases occurred at the greatest frequency in July (10.1%), and 28.9% of all cases occurred in summer. The most frequent causes of traumatic medico-legal cases in the ED were traffic accidents (43.4%), violent crime (30.5%), and suicide attempt (7.2%). The most common method of attempted suicide was drug intake (86.4%). Of the traumatic medico-legal cases, 12.3% were hospitalized, and 24.2% of those hospitalized were admitted to the orthopedics service. The most common error in forensic reports was the incomplete recording of the patient's "cooperation" status (82.7%). Additionally, external traumatic lesions were not defined in 62.4% of forensic reports. The majority of traumatic medico-legal cases were males aged 18-44 years, the most common source of trauma was traffic accidents, and presentations peaked in the summer months. When writing forensic reports, emergency physicians made mistakes in noting physical examination findings and in identifying external traumatic lesions. Physicians should make sure that the traumatic medico-legal patients they treat have adequate documentation for reference during legal proceedings. The legal duties and responsibilities of physicians should be emphasized through in-service training.

  1. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    PubMed

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable, but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted, but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and to generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors together with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. The study also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors, and there was inconsistency in the perceived boundaries of what constitutes an error. When asked about the definition of model error, interviewees tended to exclude matters of judgement from being errors and to focus on 'slips' and 'lapses', yet discussion of slips and lapses comprised less than 20% of the discussion on types of errors; interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification means ensuring that the computer model correctly implements the intended model, whereas validation means ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues; however, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity, comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem, are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models, so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommended topics for future research are studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.

  2. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative to SEM. While it has a higher standard error bias than SEM, it has comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
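
    As a rough illustration of the first stage shared by these methods, the sketch below simulates a one-factor measurement model and computes regression (Thomson) and Bartlett factor scores from assumed loadings; it is a toy example and does not implement the bias avoiding or bias correcting estimators compared in the article.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 5000
        lam = np.array([0.8, 0.7, 0.6])   # assumed standardised loadings
        psi = 1.0 - lam**2                # unique variances of the indicators

        # Simulate one latent factor and three indicators.
        eta = rng.normal(size=n)
        X = np.outer(eta, lam) + rng.normal(size=(n, 3)) * np.sqrt(psi)

        Sigma = np.outer(lam, lam) + np.diag(psi)    # model-implied covariance

        # Regression scores: F = x' Sigma^{-1} lambda (unit factor variance).
        f_reg = X @ np.linalg.solve(Sigma, lam)

        # Bartlett scores: F = (lam' Psi^{-1} lam)^{-1} lam' Psi^{-1} x.
        w_bar = (lam / psi) / (lam @ (lam / psi))
        f_bar = X @ w_bar

        print(np.corrcoef(eta, f_reg)[0, 1], np.corrcoef(eta, f_bar)[0, 1])

    In a full FSR analysis such scores (for predictor and outcome factors) would then enter an ordinary regression, which is where the biases discussed above arise.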

  3. Reliability of clinical impact grading by healthcare professionals of common prescribing error and optimisation cases in critical care patients.

    PubMed

    Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H

    2017-04-01

    To identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases, and to identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between- and within-profession rater reliability and modal clinical impact grading. Between- and within-profession rater reliability analyses used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-profession variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  4. Reading difficulties in Albanian.

    PubMed

    Avdyli, Rrezarta; Cuetos, Fernando

    2012-10-01

    Albanian is an Indo-European language with a shallow orthography, in which there is an absolute correspondence between graphemes and phonemes. We aimed to identify the reading strategies used by Albanian reading-disabled children during word and pseudoword reading. A pool of 114 Kosovar reading-disabled children matched with 150 normal readers aged 6 to 11 years old was tested. They had to read 120 stimuli varied in lexicality, frequency, and length. The results, in terms of reading accuracy as well as reading times, show that both groups were affected by lexicality and length effects. In both groups, length and lexicality effects were significantly modulated by school year: the length effect was greater in early grades and diminished later, whereas the lexicality effect showed the opposite pattern. However, the reading difficulties group was less accurate and slower than the control group across all school grades. Analyses of the error patterns showed that phonological errors, in which a letter replacement leads to a new nonword, were the most common error type in both groups, although as grade rises, visual errors and lexicalizations increased more in the control group than in the reading difficulties group. These findings suggest that Albanian normal children use both routes (lexical and sublexical) from the beginning of reading despite the complete regularity of Albanian, while children with reading difficulties start out using sublexical reading and take longer to acquire lexical reading, though finally both routes become functional.

  5. Preventing dispensing errors by alerting for drug confusions in the pharmacy information system-A survey of users.

    PubMed

    Campmans, Zizi; van Rhijn, Arianne; Dull, René M; Santen-Reestman, Jacqueline; Taxis, Katja; Borgsteede, Sander D

    2018-01-01

    Drug confusion is thought to be the most common type of dispensing error. Several strategies can be implemented to reduce the risk of medication errors; one of these is alerts in the pharmacy information system. The aim was to evaluate the experiences of pharmacists and pharmacy technicians with alerts for drug name and strength confusion. In May 2017, a cross-sectional survey of pharmacists and pharmacy technicians was performed in community pharmacies in the Netherlands using an online questionnaire. Of the 269 respondents, 86% (n = 230) had noticed the alert for drug name confusion, and 26% (n = 67) the alert for drug strength confusion. Of those 230, 9% (n = 20) had experienced that the alert prevented dispensing the wrong drug; for drug strength confusion, this proportion was 12% (n = 8). Respondents preferred to have alerts for both drug name and strength confusion in the pharmacy information system. 'Alert fatigue' was an important issue, so alerts should only be introduced for frequent confusions or confusions with serious consequences. Pharmacists and pharmacy technicians were positive about having alerts for drug confusions in their pharmacy information system and experienced that alerts contributed to the prevention of dispensing errors. To prevent alert fatigue, it was considered important not to include every possible confusion as a new alert: the potential contribution to the prevention of drug confusion should be weighed against the risk of alert fatigue.

  6. Warning: This keyboard will deconstruct--the role of the keyboard in skilled typewriting.

    PubMed

    Crump, Matthew J C; Logan, Gordon D

    2010-06-01

    Skilled actions are commonly assumed to be controlled by precise internal schemas or cognitive maps. We challenge these ideas in the context of skilled typing, where prominent theories assume that typing is controlled by a well-learned cognitive map that plans finger movements without feedback. In two experiments, we demonstrate that online physical interaction with the keyboard critically mediates typing skill. Typists performed single-word and paragraph typing tasks on a regular keyboard, a laser-projection keyboard, and two deconstructed keyboards, made by removing successive layers of a regular keyboard. Averaged over the laser and deconstructed keyboards, response times for the first keystroke increased by 37%, the interval between keystrokes increased by 120%, and error rate increased by 177%, relative to those of the regular keyboard. A schema view predicts no influence of external motor feedback, because actions could be planned internally with high precision. We argue that the expert knowledge mediating action control emerges during online interaction with the physical environment.

  7. Band-gap corrected density functional theory calculations for InAs/GaSb type II superlattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianwei; Zhang, Yong

    2014-12-07

    We performed pseudopotential based density functional theory (DFT) calculations for GaSb/InAs type II superlattices (T2SLs), with bandgap errors from the local density approximation mitigated by applying an empirical method to correct the bulk bandgaps. Specifically, this work (1) compared the calculated bandgaps with experimental data and non-self-consistent atomistic methods; (2) calculated the T2SL band structures with varying structural parameters; (3) investigated the interfacial effects associated with the no-common-atom heterostructure; and (4) studied the strain effect due to lattice mismatch between the two components. This work demonstrates the feasibility of applying the DFT method to more exotic heterostructures and defect problems related to this material system.

  8. Development of a Stereovision-Based Technique to Measure the Spread Patterns of Granular Fertilizer Spreaders

    PubMed Central

    Cool, Simon R.; Pieters, Jan G.; Seatovic, Dejan; Mertens, Koen C.; Nuyttens, David; Van De Gucht, Tim C.; Vangeyte, Jürgen

    2017-01-01

    Centrifugal fertilizer spreaders are by far the most commonly used granular fertilizer spreader type in Europe. Their spread pattern, however, is error-prone, potentially leading to an undesired distribution of particles in the field and losses out of the field, often caused by poor calibration of the spreader for the specific fertilizer used. Due to the large environmental impact of fertilizer use, it is important to optimize the spreading process and minimize these errors. Spreader calibrations can be performed by using collection trays to determine the (field) spread pattern, but this is very time-consuming and expensive for the farmer and hence not common practice. Therefore, we developed an innovative multi-camera system to predict the spread pattern in a fast and accurate way, independent of the spreader configuration. Using high-speed stereovision, ejection parameters of particles leaving the spreader vanes were determined relative to a coordinate system associated with the spreader. The landing positions and subsequent spread patterns were determined using a ballistic model incorporating the effect of tractor motion and wind. Experiments were conducted with a commercial spreader and showed a high repeatability. The results were transformed to one spatial dimension to enable comparison with transverse spread patterns determined in the field and showed similar results. PMID:28617339
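
    The last step of the pipeline, converting measured ejection parameters into landing positions, can be illustrated with a drag-free point-mass trajectory. The published model also accounts for aerodynamic effects, so the sketch below is a simplification with invented numbers; only the inclusion of the tractor velocity follows the description above, and wind could be handled as a similar additive term.

        import numpy as np

        def landing_position(p0, v0, v_tractor, g=9.81):
            """Drag-free landing point (x, y) of a particle ejected at p0 (m)
            with velocity v0 (m/s); tractor motion is added to the ejection
            velocity to express the result in field coordinates."""
            x0, y0, z0 = p0
            vx, vy, vz = v0 + v_tractor
            t = (vz + np.sqrt(vz**2 + 2 * g * z0)) / g   # flight time to z = 0
            return np.array([x0 + vx * t, y0 + vy * t])

        # Particle leaving a vane 0.8 m above ground at 30 m/s, 5 degrees upward,
        # with the tractor driving at 2 m/s in the y direction.
        v = 30 * np.array([np.cos(np.radians(5)), 0.0, np.sin(np.radians(5))])
        print(landing_position(np.array([0.0, 0.0, 0.8]), v, np.array([0.0, 2.0, 0.0])))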

  9. Effect of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths.

    PubMed

    Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S

    2009-11-01

    Determine effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine if limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the type of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine if the size of the problem-space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. We analyzed data mined from 44 pathology residents using SlideTutor, a medical intelligent tutoring system in dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study, representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. Frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains. Tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation towards the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.

  10. Prevalence of the refractive errors by age and gender: the Mashhad eye study of Iran.

    PubMed

    Ostadimoghaddam, Hadi; Fotouhi, Akbar; Hashemi, Hassan; Yekta, Abbasali; Heravian, Javad; Rezvan, Farhad; Ghadimi, Hamidreza; Rezvan, Bijan; Khabazkhoob, Mehdi

    2011-11-01

    Refractive errors are a common eye problem. Considering the low number of population-based studies in Iran in this regard, we decided to determine the prevalence rates of myopia and hyperopia in a population in Mashhad, Iran. Cross-sectional population-based study. Random cluster sampling. Of 4453 selected individuals from the urban population of Mashhad, 70.4% participated. Refractive error was determined using manifest (age > 15 years) and cycloplegic refraction (age ≤ 15 years). Myopia was defined as a spherical equivalent of -0.5 diopter or worse. A spherical equivalent of +0.5 diopter or worse for non-cycloplegic refraction and a spherical equivalent of +2 diopters or worse for cycloplegic refraction were used to define hyperopia. Prevalence of refractive errors. The prevalence of myopia and hyperopia in individuals ≤ 15 years old was 3.64% (95% CI: 2.19-5.09) and 27.4% (95% CI: 23.72-31.09), respectively. The same measurements for subjects > 15 years of age were 22.36% (95% CI: 20.06-24.66) and 34.21% (95% CI: 31.57-36.85), respectively. Myopia was found to increase with age in individuals ≤ 15 years and decrease with age in individuals > 15 years of age. The rate of hyperopia showed a significant increase with age in individuals > 15 years. The prevalence of astigmatism was 25.64% (95% CI: 23.76-27.51). In children and the elderly, hyperopia is the most prevalent refractive error. After hyperopia, astigmatism is also of importance at older ages. Age is the most important demographic factor associated with different types of refractive errors. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.
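
    With the cut-offs quoted above, classification reduces to a spherical-equivalent calculation (conventionally SE = sphere + cylinder/2; the formula is standard rather than stated in the abstract). A minimal sketch:

        def classify_refraction(sphere, cylinder, cycloplegic=False):
            """Apply the study's cut-offs: myopia if SE <= -0.5 D; hyperopia if
            SE >= +2 D (cycloplegic) or SE >= +0.5 D (manifest refraction)."""
            se = sphere + cylinder / 2.0          # spherical equivalent, diopters
            hyperopia_cut = 2.0 if cycloplegic else 0.5
            if se <= -0.5:
                return "myopia"
            if se >= hyperopia_cut:
                return "hyperopia"
            return "neither"

        print(classify_refraction(-1.25, 0.50))                   # myopia
        print(classify_refraction(1.00, 0.00, cycloplegic=True))  # neither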

  11. Predictive accuracy of a ground-water model--Lessons from a postaudit

    USGS Publications Warehouse

    Konikow, Leonard F.

    1986-01-01

    Hydrogeologic studies commonly include the development, calibration, and application of a deterministic simulation model. To help assess the value of using such models to make predictions, a postaudit was conducted on a previously studied area in the Salt River and lower Santa Cruz River basins in central Arizona. A deterministic, distributed-parameter model of the ground-water system in these alluvial basins was calibrated by Anderson (1968) using about 40 years of data (1923–64). The calibrated model was then used to predict future water-level changes during the next 10 years (1965–74). Examination of actual water-level changes in 77 wells from 1965–74 indicates a poor correlation between observed and predicted water-level changes. The differences have a mean of 73 ft (that is, predicted declines consistently exceeded those observed) and a standard deviation of 47 ft. The bias in the predicted water-level change can be accounted for by the large error in the assumed total pumpage during the prediction period. However, the spatial distribution of errors in predicted water-level change does not correlate with the spatial distribution of errors in pumpage. Consequently, the lack of precision probably is not related only to errors in assumed pumpage, but may indicate the presence of other sources of error in the model, such as the two-dimensional representation of a three-dimensional problem or the lack of consideration of land-subsidence processes. This type of postaudit is a valuable method of verifying a model, and an evaluation of predictive errors can provide an increased understanding of the system and aid in assessing the value of undertaking development of a revised model.
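
    The arithmetic behind such a postaudit is straightforward: difference the predicted and observed water-level changes well by well, then summarise the bias and spread. A minimal sketch with invented values:

        import numpy as np

        predicted = np.array([120.0, 95.0, 150.0, 80.0])   # hypothetical declines, ft
        observed = np.array([40.0, 30.0, 60.0, 25.0])

        errors = predicted - observed
        print(f"mean error: {errors.mean():.1f} ft")    # positive -> over-predicted declines
        print(f"std dev:    {errors.std(ddof=1):.1f} ft")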

  12. Working memory load impairs the evaluation of behavioral errors in the medial frontal cortex.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2017-10-01

    Early error monitoring in the medial frontal cortex enables error detection and the evaluation of error significance, which helps prioritize adaptive control. This ability has been assumed to be independent of central capacity, a limited pool of resources assumed to be involved in cognitive control. The present study investigated whether error evaluation depends on central capacity by measuring the error-related negativity (Ne/ERN) in a flanker paradigm while working memory load was varied on two levels. We used a four-choice flanker paradigm in which participants had to classify targets while ignoring flankers. Errors could be due to responding either to the flankers (flanker errors) or to none of the stimulus elements (nonflanker errors). With low load, the Ne/ERN was larger for flanker errors than for nonflanker errors, an effect that has previously been interpreted as reflecting the differential significance of these error types. With high load, no such effect of error type on the Ne/ERN was observable. Our findings suggest that working memory load does not impair the generation of an Ne/ERN per se but rather impairs the evaluation of error significance. They demonstrate that error monitoring is composed of capacity-dependent and capacity-independent mechanisms. © 2017 Society for Psychophysiological Research.

  13. IMRT QA: Selecting gamma criteria based on error detection sensitivity.

    PubMed

    Steers, Jennifer M; Fraass, Benedick A

    2016-04-01

    The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), criteria of 3%/3 mm with threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
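
    For readers unfamiliar with the underlying metric, a simplified one-dimensional global gamma comparison is sketched below. It conveys only the form of the index (dose difference and distance-to-agreement combined in quadrature, with a low-dose threshold); the study itself used full measurement-based comparisons on the ArcCHECK device, and all profiles here are invented.

        import numpy as np

        def gamma_passing_rate(meas, calc, x, dose_tol=0.03, dta_mm=3.0, threshold=0.10):
            """1-D global gamma: dose_tol is a fraction of the maximum measured
            dose, dta_mm the distance-to-agreement, threshold the low-dose
            cut-off below which measured points are ignored."""
            d_ref = dose_tol * meas.max()
            gammas = []
            for i in np.where(meas >= threshold * meas.max())[0]:
                dist = (x - x[i]) / dta_mm          # distance term vs. all calc points
                ddiff = (calc - meas[i]) / d_ref    # dose-difference term
                gammas.append(np.sqrt(dist**2 + ddiff**2).min())
            return (np.array(gammas) <= 1.0).mean() # fraction of points with gamma <= 1

        x = np.linspace(0, 100, 201)                 # positions, mm (0.5 mm grid)
        meas = np.exp(-((x - 50) / 20) ** 2)         # hypothetical measured profile
        calc = 1.02 * np.exp(-((x - 51) / 20) ** 2)  # 2% scaling plus a 1 mm shift
        print(gamma_passing_rate(meas, calc, x))

    Sweeping an induced error (e.g., the 2% scaling above) over a range of magnitudes and plotting the passing rate against it reproduces the error curve idea described in the abstract.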

  14. The solar vector error within the SNPP Common GEO code, the correction, and the effects on the VIIRS SDR RSB calibration

    NASA Astrophysics Data System (ADS)

    Fulbright, Jon; Anderson, Samuel; Lei, Ning; Efremova, Boryana; Wang, Zhipeng; McIntire, Jeffrey; Chiang, Kwofu; Xiong, Xiaoxiong

    2014-11-01

    Due to a software error, the solar and lunar vectors reported in the on-board calibrator intermediate product (OBC-IP) files for SNPP VIIRS are incorrect. The magnitude of the error is about 0.2 degree, and the magnitude is increasing by about 0.01 degree per year. This error, although small, has an effect on the radiometric calibration of the reflective solar bands (RSB) because accurate solar angles are required for calculating the screen transmission functions and for calculating the illumination of the Solar Diffuser panel. In this paper, we describe the error in the Common GEO code and how it may be fixed. We present evidence for the error from within the OBC-IP data. We also describe the effects of the solar vector error on the RSB calibration and the Sensor Data Record (SDR). In order to perform this evaluation, we have reanalyzed the yaw-maneuver data to compute the vignetting functions required for the on-orbit SD RSB radiometric calibration. After the reanalysis, we find an effect of up to 0.5% on the shortwave infrared (SWIR) RSB calibration.
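
    The quoted error magnitudes are simply the angle between the reported and corrected vectors, which can be checked as follows (vectors invented for illustration):

        import numpy as np

        def angular_error_deg(u, v):
            """Angle in degrees between two vectors, e.g. reported vs. corrected
            solar vectors; the clip guards against round-off outside [-1, 1]."""
            cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        u = np.array([1.0, 0.0, 0.0])
        v = np.array([np.cos(np.radians(0.2)), np.sin(np.radians(0.2)), 0.0])
        print(angular_error_deg(u, v))   # ~0.2, the magnitude reported above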

  15. The science of medical decision making: neurosurgery, errors, and personal cognitive strategies for improving quality of care.

    PubMed

    Fargen, Kyle M; Friedman, William A

    2014-01-01

    During the last 2 decades, there has been a shift in the U.S. health care system towards improving the quality of health care provided by enhancing patient safety and reducing medical errors. Unfortunately, surgical complications, patient harm events, and malpractice claims remain common in the field of neurosurgery. Many of these events are potentially avoidable. There are an increasing number of publications in the medical literature in which authors address cognitive errors in diagnosis and treatment and strategies for reducing such errors, but these are for the most part absent in the neurosurgical literature. The purpose of this article is to highlight the complexities of medical decision making to a neurosurgical audience, with the hope of providing insight into the biases that lead us towards error and strategies to overcome our innate cognitive deficiencies. To accomplish this goal, we review the current literature on medical errors and just culture, explain the dual process theory of cognition, identify common cognitive errors affecting neurosurgeons in practice, review cognitive debiasing strategies, and finally provide simple methods that can be easily assimilated into neurosurgical practice to improve clinical decision making. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.

    PubMed

    Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M

    2003-05-13

    Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. To explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.

  17. Improving cancer patient emergency room utilization: A New Jersey state assessment.

    PubMed

    Scholer, Anthony J; Mahmoud, Omar M; Ghosh, Debopyria; Schwartzman, Jacob; Farooq, Mohammed; Cabrera, Javier; Wieder, Robert; Adam, Nabil R; Chokshi, Ravi J

    2017-12-01

    Due to its increasing incidence and its major contribution to healthcare costs, cancer is a major public health problem in the United States. The impact across different services is not well documented, and utilization of emergency departments (ED) by cancer patients is not well characterized. The aim of our study was to identify factors that can be addressed to improve the appropriate delivery of quality cancer care, thereby reducing ED utilization, decreasing hospitalizations and reducing the related healthcare costs. The New Jersey State Inpatient and Emergency Department Databases were used to identify the primary outcome variables: patient disposition and readmission rates. The independent variables were demographics, payer and clinical characteristics. Multivariable unconditional logistic regression models using clinical and demographic data were used to predict hospital admission or emergency department return. A total of 37,080 emergency department visits were cancer related, with the most common diagnosis attributed to lung cancer (30.0%), and the most common presentation was pain. The disposition of patients who visit the ED due to cancer related issues is significantly affected by race (African American OR=0.6, p value=0.02 and Hispanic OR=0.5, p value=0.02, respectively), age 65 to 75 years (SNF/ICF OR 2.35, p value=0.00 and Home Healthcare Service OR 5.15, p value=0.01, respectively), number of diagnoses (OR 1.26, p value=0.00), insurance payer (SNF/ICF OR 2.2, p value=0.02 and Home Healthcare Services OR 2.85, p value=0.07, respectively) and type of cancer (breast OR 0.54, p value=0.01, prostate OR 0.56, p value=0.01, uterine OR 0.37, p value=0.02, and other OR 0.62, p value=0.05, respectively). In addition, comorbidities increased the likelihood of death, being transferred to SNF/ICF, or utilization of home healthcare services (OR 1.6, p value=0.00, OR 1.18, p value=0.00, and OR 1.16, p value=0.04, respectively). Readmission is significantly affected by race (African Americans OR 0.41, standard error 0.08, p value=0.001 and Hispanics OR 0.29, standard error 0.11, p value=0.01, respectively), income (Quartile 2 OR 0.98, standard error 0.14, p value=0.01, Quartile 3 OR 1.07, standard error 0.13, p value=0.01, and Quartile 4 OR 0.88, standard error 0.12, p value=0.01, respectively), and type of cancer (prostate OR 0.25, standard error 0.09, p value=0.001). Web-based symptom questionnaires, patient navigators, end-of-life nursing and clinical cancer pathways can identify the cancer patients most likely to visit the ED and prompt early initiation of treatment before symptoms progress, thereby improving patient satisfaction and outcomes and reducing health care costs. Published by Elsevier Ltd.

  18. A sensitivity analysis of nine diversity and seven similarity indices

    USGS Publications Warehouse

    Boyle, Terrence P.; Smillie, Gary M.; Anderson, Jana C.; Beeson, David R.

    1990-01-01

    Indices summarizing community structure are used to evaluate fundamental community ecology, species interaction, biogeographical factors, and environmental stress. Some of these indices are insensitive to gross community changes induced by contaminants or pollution. Sixteen indices commonly used to assess the status of aquatic communities in water quality studies were evaluated using computer simulation techniques to determine specific index responses. Three communities of different initial structure (19 species, 38 species, and 83 species) were generated using the lognormal equation. Each community was then perturbed in three ways: common species disproportionately reduced, all species proportionally reduced, and rare species disproportionately reduced. The behavior of the indices was analyzed graphically, and the differential response due to initial community structure and type of community change was documented. Recommendations concerning potential sources of error in the use of community-level indices were developed.
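
    The simulation design is easy to reproduce in outline: generate a lognormal community, perturb it, and recompute an index. The sketch below uses Shannon's H' and one of the three perturbation types (disproportionately reducing the common species); the distribution parameters are invented, not those of the study.

        import numpy as np

        rng = np.random.default_rng(1)

        def shannon_h(abundances):
            """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero proportions."""
            p = abundances / abundances.sum()
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        # Lognormal community of 38 species, ranked from most to least abundant.
        community = np.sort(rng.lognormal(mean=3.0, sigma=1.2, size=38))[::-1]

        # Perturbation: disproportionately reduce the five most common species.
        perturbed = community.copy()
        perturbed[:5] *= 0.1

        print(f"H' before: {shannon_h(community):.3f}  after: {shannon_h(perturbed):.3f}")

    Running the same perturbation against each candidate index, and against communities of different initial richness, is the kind of comparison the study reports graphically.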

  19. Errors in accident data, its types, causes and methods of rectification-analysis of the literature.

    PubMed

    Ahmed, Ashar; Sadullah, Ahmad Farhan Mohd; Yahya, Ahmad Shukri

    2017-07-29

    Most of the decisions taken to improve road safety are based on accident data, which makes it the backbone of any country's road safety system. Errors in these data will lead to misidentification of black spots and hazardous road segments, projection of false estimates pertinent to accidents and fatality rates, and detection of wrong parameters responsible for accident occurrence, thereby making the entire road safety exercise ineffective. The extent of error varies from country to country depending upon various factors. Knowing the type of error in the accident data and the factors causing it enables the application of the correct method for its rectification. Therefore there is a need for a systematic literature review that addresses the topic at a global level. This paper fulfils the above research gap by providing a synthesis of literature for the different types of errors found in the accident data of 46 countries across the six regions of the world. The errors are classified and discussed with respect to each type and analysed with respect to income level; an assessment of the magnitude of each type is provided, followed by the different causes that result in their occurrence, and the various methods used to address each type of error. Among high-income countries the extent of error in reporting slight, severe, non-fatal and fatal injury accidents varied between 39-82%, 16-52%, 12-84%, and 0-31% respectively. For middle-income countries the error for the same categories varied between 93-98%, 32.5-96%, 34-99% and 0.5-89.5% respectively. The only four studies available for low-income countries showed that the error in reporting non-fatal and fatal accidents varied between 69-80% and 0-61% respectively. The logistic relation of error in accident data reporting, dichotomised at 50%, indicated that as the income level of a country increases, the probability of having less error in accident data also increases. Average error in recording information related to the variables in the categories of location, victim's information, vehicle's information, and environment was 27%, 37%, 16% and 19% respectively. Among the causes identified for errors in accident data reporting, the policing system was found to be the most important. Overall, 26 causes of errors in accident data were discussed, of which 12 were related to reporting and 14 were related to recording. "Capture-recapture" was the most widely used of the 11 different methods that can be used for the rectification of under-reporting. There were 12 studies pertinent to the rectification of accident location, and almost all of them utilised a Geographical Information System (GIS) platform coupled with a matching algorithm to estimate the correct location. It is recommended that the policing system be reformed and public awareness be created to help reduce errors in accident data. Copyright © 2017 Elsevier Ltd. All rights reserved.
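
    As background for the rectification method most often cited above, the basic two-source capture-recapture estimate (here with the Chapman correction to the Lincoln-Petersen estimator) is sketched below; n1 and n2 would be counts from two incomplete, independent registers (e.g., police and hospital records) and m the cases matched in both. All numbers are invented.

        def chapman_estimate(n1, n2, m):
            """Chapman-corrected two-source capture-recapture estimate of the
            true case count from two incomplete, independent registers."""
            return (n1 + 1) * (n2 + 1) / (m + 1) - 1

        police, hospital, matched = 900, 700, 500
        n_hat = chapman_estimate(police, hospital, matched)
        print(f"estimated total accidents: {n_hat:.0f}")                      # ~1260
        print(f"implied police under-reporting: {1 - police / n_hat:.0%}")   # ~29%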

  20. Spatial Assessment of Model Errors from Four Regression Techniques

    Treesearch

    Lianjun Zhang; Jeffrey H. Gove

    2005-01-01

    Forest modelers have attempted to account for the spatial autocorrelations among trees in growth and yield models by applying alternative regression techniques such as linear mixed models (LMM), generalized additive models (GAM), and geographically weighted regression (GWR). However, the model errors are commonly assessed using average errors across the entire study...
