Sample records for specific error types

  1. Imagery of Errors in Typing

    ERIC Educational Resources Information Center

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  2. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  3. Linguistic pattern analysis of misspellings of typically developing writers in grades 1-9.

    PubMed

    Bahr, Ruth Huntley; Silliman, Elaine R; Berninger, Virginia W; Dow, Michael

    2012-12-01

    A mixed-methods approach, evaluating triple word-form theory, was used to describe linguistic patterns of misspellings. Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in Grades 1-9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade-level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between Grades 4 and 5. Similar error types were noted across age groups, but the nature of linguistic feature error changed with age. Triple word-form theory was supported. By Grade 1, orthographic errors predominated, and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects nonlinear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling.

  4. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) error-detection capability evaluation using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In a dosimetric verification comparison of a physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion once the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed good agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
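
    Gamma analysis, used throughout this record, combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. As a rough illustration only (not the MFX or Delta4PT implementation), a minimal 1D global gamma calculation on hypothetical dose profiles, assuming 3%/2 mm criteria:

```python
import math

def gamma_index(ref_pos, ref_dose, eval_pos, eval_dose,
                dose_crit=0.03, dta_crit=2.0):
    """1D global gamma: for each reference point, minimize the combined
    dose-difference / distance-to-agreement metric over evaluated points."""
    d_max = max(ref_dose)  # global normalization dose
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        best = min(
            math.sqrt(((ep - rp) / dta_crit) ** 2 +
                      ((ed - rd) / (dose_crit * d_max)) ** 2)
            for ep, ed in zip(eval_pos, eval_dose)
        )
        gammas.append(best)
    return gammas

# Hypothetical profiles: the evaluated dose is slightly shifted from reference
pos = [i * 1.0 for i in range(11)]                       # positions in mm
ref = [100 * math.exp(-((p - 5.0) / 3) ** 2) for p in pos]
ev  = [100 * math.exp(-((p - 5.5) / 3) ** 2) for p in pos]

g = gamma_index(pos, ref, pos, ev)
passing_rate = sum(1 for x in g if x <= 1.0) / len(g)    # fraction with gamma <= 1
```

    A point passes when its gamma value is at most 1; the passing rate is the fraction of points that pass, which is the quantity compared against acceptance criteria such as the 90% threshold above.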

  5. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    In this study, we assessed the explicit reporting of medical errors in the electronic record. We looked for cases in which the provider explicitly stated that he or she or another provider had committed an error. The advantage of the technique is that it is not limited to a specific type of error. Our goals were to 1) measure the rate at which medical errors were documented in medical records, and 2) characterize the types of errors that were reported.

  6. Patients' perception of types of errors in palliative care - results from a qualitative interview study.

    PubMed

    Kiesewetter, Isabel; Schulz, Christian; Bausewein, Claudia; Fountain, Rita; Schmitz, Andrea

    2016-08-11

    Medical errors have been recognized as a relevant public health concern, and research efforts to improve patient safety have increased. In palliative care, however, studies on errors are rare and mainly focus on quantitative measures. We aimed to explore how palliative care patients perceive and think about errors in palliative care and to generate an understanding of patients' perception of errors in that specialty. A semistructured qualitative interview study was conducted with patients who had received at least 1 week of palliative care in an inpatient or outpatient setting. All interviews were transcribed verbatim and analysed according to qualitative content analysis. Twelve patients from two centers were interviewed (7 women, median age 63.5 years, range 22-90 years). Eleven patients suffered from a malignancy. Days in palliative care ranged from 10 to 180 days (median 28 days). Ninety-six categories emerged, which were summarized under 11 umbrella terms: definition, difference, type, cause, consequence, meaning, recognition, handling, prevention, person causing, and affected person. A deductive model was developed assigning the umbrella terms to error-theory-based factor levels (definition, type, and process-related factors). Twenty-three categories for type of error were identified, including 12 categories that can be considered palliative care specific. On the level of process-related factors, 3 palliative-care-specific categories emerged (recognition, meaning, and consequence of errors). From the patients' perspective, there are some aspects of errors that could be considered specific to palliative care. As the results of our study suggest, these palliative-care-specific aspects seem to be very important from the patients' point of view and should receive further investigation. Moreover, the findings of this study can serve as a guide to further assess single aspects or categories of errors in palliative care in future research.

  7. Linguistic Pattern Analysis of Misspellings of Typically Developing Writers in Grades 1 to 9

    PubMed Central

    Bahr, Ruth Huntley; Silliman, Elaine R.; Berninger, Virginia W.; Dow, Michael

    2012-01-01

    Purpose A mixed methods approach, evaluating triple word form theory, was used to describe linguistic patterns of misspellings. Method Spelling errors were taken from narrative and expository writing samples provided by 888 typically developing students in grades 1–9. Errors were coded by category (phonological, orthographic, and morphological) and specific linguistic feature affected. Grade level effects were analyzed with trend analysis. Qualitative analyses determined frequent error types and how use of specific linguistic features varied across grades. Results Phonological, orthographic, and morphological errors were noted across all grades, but orthographic errors predominated. Linear trends revealed developmental shifts in error proportions for the orthographic and morphological categories between grades 4–5. Similar error types were noted across age groups but the nature of linguistic feature error changed with age. Conclusions Triple word-form theory was supported. By grade 1, orthographic errors predominated and phonological and morphological error patterns were evident. Morphological errors increased in relative frequency in older students, probably due to a combination of word-formation issues and vocabulary growth. These patterns suggest that normal spelling development reflects non-linear growth and that it takes a long time to develop a robust orthographic lexicon that coordinates phonology, orthography, and morphology and supports word-specific, conventional spelling. PMID:22473834

  8. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

    Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.

  9. Error Type and Lexical Frequency Effects: Error Detection in Swedish Children with Language Impairment

    ERIC Educational Resources Information Center

    Hallin, Anna Eva; Reuterskiöld, Christina

    2017-01-01

    Purpose: The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies. Method:…

  10. Associations between errors and contributing factors in aircraft maintenance

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2003-01-01

    In recent years cognitive error models have provided insights into the unsafe acts that lead to many accidents in safety-critical environments. Most models of accident causation are based on the notion that human errors occur in the context of contributing factors. However, there is a lack of published information on possible links between specific errors and contributing factors. A total of 619 safety occurrences involving aircraft maintenance were reported using a self-completed questionnaire. Of these occurrences, 96% were related to the actions of maintenance personnel. The types of errors that were involved, and the contributing factors associated with those actions, were determined. Each type of error was associated with a particular set of contributing factors and with specific occurrence outcomes. Among the associations were links between memory lapses and fatigue and between rule violations and time pressure. Potential applications of this research include assisting with the design of accident prevention strategies, the estimation of human error probabilities, and the monitoring of organizational safety performance.

  11. Definition of an Acceptable Glass composition Region (AGCR) via an Index System and a Partitioning Function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peeler, D. K.; Taylor, A. S.; Edwards, T.B.

    2005-06-26

    The objective of this investigation was to appeal to the available ComPro™ database of glass compositions and measured PCTs that have been generated in the study of High Level Waste (HLW)/Low Activity Waste (LAW) glasses to define an Acceptable Glass Composition Region (AGCR). The term AGCR refers to a glass composition region in which the durability response (as defined by the Product Consistency Test (PCT)) is less than some pre-defined, acceptable value that satisfies the Waste Acceptance Product Specifications (WAPS); a value of 10 g/L was selected for this study. To assess the effectiveness of a specific classification or index system to differentiate between acceptable and unacceptable glasses, two types of errors (Type I and Type II errors) were monitored. A Type I error reflects that a glass with an acceptable durability response (i.e., a measured NL [B] < 10 g/L) is classified as unacceptable by the system of composition-based constraints. A Type II error occurs when a glass with an unacceptable durability response is classified as acceptable by the system of constraints. Over the course of the efforts to meet this objective, two approaches were assessed. The first (referred to as the "Index System") was based on the use of an evolving system of compositional constraints which were used to explore the possibility of defining an AGCR. This approach was primarily based on "glass science" insight to establish the compositional constraints. Assessments of the Brewer and Taylor Index Systems did not result in the definition of an AGCR. Although the Taylor Index System minimized Type I errors, which allowed access to composition regions of interest to improve melt rate or increase waste loadings for DWPF as compared to the current durability model, Type II errors were also committed. In the context of the application of a particular classification system in the process control system, Type II errors are much more serious than Type I errors. A Type I error only reflects that the particular constraint system being used is overly conservative (i.e., its application restricts access to glasses that have an acceptable measured durability response). A Type II error results in a more serious misclassification that could allow the transfer of a Slurry Mix Evaporator (SME) batch to the melter that is predicted to produce a durable product based on the specific system applied but in reality does not meet the defined "acceptability" criteria. More specifically, a nondurable product could be produced in DWPF. Given the presence of Type II errors, the Index System approach was deemed inadequate for further implementation consideration at the DWPF. The second approach (the JMP partitioning process) was purely data driven and empirically derived; glass science was not a factor. In this approach, the collection of composition-durability data in ComPro was sequentially partitioned or split based on the best available specific criteria and variables. More specifically, the JMP software chose the oxide (Al2O3 for this dataset) that most effectively partitions the PCT responses (NL [B] values), though perhaps not 100% effectively based on a single oxide. Based on this initial split, a second request was made to split a particular set of the "Y" values (good or bad PCTs based on the 10 g/L limit) based on the next most critical "X" variable. This "splitting" or "partitioning" process was repeated until an AGCR was defined based on the use of only 3 oxides (Al2O3, CaO, and MgO) and critical values of > 3.75 wt% Al2O3, ≥ 0.616 wt% CaO, and < 3.521 wt% MgO. Using this set of criteria, the ComPro database was partitioned such that no Type II errors were committed. The automated partitioning function screened or removed 978 of the 2406 ComPro glasses, which caused some initial concern regarding excessive conservatism regardless of its ability to identify an AGCR. However, a preliminary review shows that the 1428 "acceptable" glasses defining the AGCR include glass systems of interest to support the accelerated mission.
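
    The reported partition rule lends itself to a direct check. A minimal sketch of the Type I/Type II bookkeeping described in this record, using the published cut points (> 3.75 wt% Al2O3, ≥ 0.616 wt% CaO, < 3.521 wt% MgO; durable if NL [B] < 10 g/L) applied to hypothetical glass compositions:

```python
def agcr_accepts(al2o3, cao, mgo):
    """Partition rule reported for the ComPro dataset (wt% oxides)."""
    return al2o3 > 3.75 and cao >= 0.616 and mgo < 3.521

def classify_errors(glasses, limit=10.0):
    """Count Type I (durable but rejected) and Type II
    (non-durable but accepted) misclassifications."""
    type1 = type2 = 0
    for g in glasses:
        accepted = agcr_accepts(g["Al2O3"], g["CaO"], g["MgO"])
        durable = g["NL_B"] < limit   # measured PCT response, g/L
        if durable and not accepted:
            type1 += 1
        elif not durable and accepted:
            type2 += 1
    return type1, type2

# Hypothetical glasses (wt% oxides and measured NL [B] in g/L)
glasses = [
    {"Al2O3": 5.0, "CaO": 1.0, "MgO": 1.0, "NL_B": 2.0},   # durable, accepted
    {"Al2O3": 2.0, "CaO": 1.0, "MgO": 1.0, "NL_B": 4.0},   # durable, rejected -> Type I
    {"Al2O3": 5.0, "CaO": 1.0, "MgO": 1.0, "NL_B": 15.0},  # non-durable, accepted -> Type II
]
t1, t2 = classify_errors(glasses)  # -> (1, 1)
```

    The asymmetry discussed above falls out directly: a nonzero Type I count only restricts the acceptable region, while any nonzero Type II count means a non-durable glass would pass the composition screen.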

  12. Verb inflection in monolingual Dutch and sequential bilingual Turkish-Dutch children with and without SLI.

    PubMed

    Blom, Elma; de Jong, Jan; Orgassa, Antje; Baker, Anne; Weerman, Fred

    2013-01-01

    Both children with specific language impairment (SLI) and children who acquire a second language (L2) make errors with verb inflection. This overlap between SLI and L2 raises the question of whether verb inflection can discriminate between L2 children with and without SLI. In this study we addressed this question for Dutch. The secondary goal of the study was to investigate variation in error types and error profiles across groups. Data were collected from 6-8-year-old children with SLI who acquire Dutch as their first language (L1), Dutch L1 children with typical development (TD), Dutch L2 children with SLI, and Dutch L1 TD children who were on average 2 years younger. An experimental elicitation task was employed that tested use of verb inflection; context (3SG, 3PL) was manipulated, and word order and verb type were controlled. Accuracy analyses revealed effects of impairment in both L1 and L2 children with SLI. However, individual variation indicated that there is no specific error profile for SLI. Verb inflection use as measured in our study discriminated fairly well in the L1 group, but classification was less accurate in the L2 group. Furthermore, between-group differences emerged for certain types of errors, but all groups also showed considerable variation in errors, and there was not a specific error profile that distinguished SLI from TD. © 2013 Royal College of Speech and Language Therapists.

  13. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

    When testing the equality of means from two different populations, a t-test or large-sample normal test tends to be performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.
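
    The design-dependence described here can be illustrated by Monte Carlo. A minimal sketch (not the paper's procedures): a two-stage, two-sample z-test under the null, comparing a fixed design with one whose second-stage sample size depends on the first-stage result, both analyzed with a naive pooled z-test at nominal 5%. All sizes and thresholds are illustrative assumptions:

```python
import math
import random

def z_stat(xs, ys):
    """Two-sample z statistic assuming known unit variance in each group."""
    nx, ny = len(xs), len(ys)
    return (sum(xs) / nx - sum(ys) / ny) / math.sqrt(1 / nx + 1 / ny)

def simulate(adaptive, n_sim=10000, n1=20, seed=1):
    """Monte Carlo type I error under H0: both groups N(0, 1).
    If adaptive, the second-stage size depends on the stage-1 result."""
    rng = random.Random(seed)
    crit = 1.959963984540054          # two-sided 5% normal quantile
    rejections = 0
    for _ in range(n_sim):
        x = [rng.gauss(0, 1) for _ in range(n1)]
        y = [rng.gauss(0, 1) for _ in range(n1)]
        if adaptive:
            # enlarge stage 2 only when stage 1 looks "promising"
            n2 = 80 if abs(z_stat(x, y)) > 1.0 else 10
        else:
            n2 = 40
        x += [rng.gauss(0, 1) for _ in range(n2)]
        y += [rng.gauss(0, 1) for _ in range(n2)]
        if abs(z_stat(x, y)) > crit:  # naive pooled test on all the data
            rejections += 1
    return rejections / n_sim

fixed_rate = simulate(adaptive=False)     # close to the nominal 0.05
adaptive_rate = simulate(adaptive=True)   # altered by the data-driven design
```

    With a fixed design the empirical rejection rate sits at the nominal level; making the second-stage size a function of the first-stage statistic changes the null distribution of the pooled statistic, which is the alteration the abstract refers to.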

  14. Corrective Techniques and Future Directions for Treatment of Residual Refractive Error Following Cataract Surgery

    PubMed Central

    Moshirfar, Majid; McCaughey, Michael V; Santiago-Caban, Luis

    2015-01-01

    Postoperative residual refractive error following cataract surgery is not an uncommon occurrence for a large proportion of modern-day patients. Residual refractive errors can be broadly classified into 3 main categories: myopic, hyperopic, and astigmatic. The degree to which a residual refractive error adversely affects a patient is dependent on the magnitude of the error, as well as the specific type of intraocular lens the patient possesses. There are a variety of strategies for resolving residual refractive errors that must be individualized for each specific patient scenario. In this review, the authors discuss contemporary methods for rectification of residual refractive error, along with their respective indications/contraindications, and efficacies. PMID:25663845

  15. Corrective Techniques and Future Directions for Treatment of Residual Refractive Error Following Cataract Surgery.

    PubMed

    Moshirfar, Majid; McCaughey, Michael V; Santiago-Caban, Luis

    2014-12-01

    Postoperative residual refractive error following cataract surgery is not an uncommon occurrence for a large proportion of modern-day patients. Residual refractive errors can be broadly classified into 3 main categories: myopic, hyperopic, and astigmatic. The degree to which a residual refractive error adversely affects a patient is dependent on the magnitude of the error, as well as the specific type of intraocular lens the patient possesses. There are a variety of strategies for resolving residual refractive errors that must be individualized for each specific patient scenario. In this review, the authors discuss contemporary methods for rectification of residual refractive error, along with their respective indications/contraindications, and efficacies.

  16. ASCERTAINMENT OF ON-ROAD SAFETY ERRORS BASED ON VIDEO REVIEW

    PubMed Central

    Dawson, Jeffrey D.; Uc, Ergun Y.; Anderson, Steven W.; Dastrup, Elizabeth; Johnson, Amy M.; Rizzo, Matthew

    2011-01-01

    Summary Using an instrumented vehicle, we have studied several aspects of the on-road performance of healthy and diseased elderly drivers. One goal of such studies is to ascertain the type and frequency of driving safety errors. Because the judgment of such errors is somewhat subjective, we applied a taxonomy system of 15 general safety error categories and 76 specific safety error types. We also employed and trained professional driving instructors to review the video data of the on-road drives. In this report, we illustrate our rating system on a group of 111 drivers, ages 65 to 89. These drivers made errors in 13 of the 15 error categories, comprising 42 of the 76 error types. A mean (SD) of 35.8 (12.8) safety errors per drive was noted, with 2.1 (1.7) of them being judged as serious. Our methodology may be useful in applications such as intervention studies, and in longitudinal studies of changes in driving abilities in patients with declining cognitive ability. PMID:24273753

  17. Pilot-controller communication errors : an analysis of Aviation Safety Reporting System (ASRS) reports

    DOT National Transportation Integrated Search

    1998-08-01

    The purpose of this study was to identify the factors that contribute to pilot-controller communication errors. Reports submitted to the Aviation Safety Reporting System (ASRS) offer detailed accounts of specific types of errors and a great deal of ...

  18. Group-Specific Effects of Matching Subtest Contamination on the Identification of Differential Item Functioning

    ERIC Educational Resources Information Center

    Keiffer, Elizabeth Ann

    2011-01-01

    A differential item functioning (DIF) simulation study was conducted to explore the type and level of impact that contamination had on type I error and power rates in DIF analyses when the suspect item favored the same or opposite group as the DIF items in the matching subtest. Type I error and power rates were displayed separately for the…

  19. Errors and conflict at the task level and the response level.

    PubMed

    Desmet, Charlotte; Fias, Wim; Hartstra, Egbert; Brass, Marcel

    2011-01-26

    In the last decade, research on error and conflict processing has become one of the most influential research areas in the domain of cognitive control. There is now converging evidence that a specific part of the posterior frontomedian cortex (pFMC), the rostral cingulate zone (RCZ), is crucially involved in the processing of errors and conflict. However, error-related research has focused primarily on a specific error type, namely, response errors. The aim of the present study was to investigate whether errors on the task level rely on the same neural and functional mechanisms. Here we report a dissociation of both error types in the pFMC: whereas response errors activate the RCZ, task errors activate the dorsal frontomedian cortex. Although this last region shows an overlap in activation for task and response errors on the group level, a closer inspection of the single-subject data is more in accordance with a functional anatomical dissociation. When investigating brain areas related to conflict on the task and response levels, a clear dissociation was observed between areas associated with response conflict and with task conflict. Overall, our data support a dissociation between response and task levels of processing in the pFMC. In addition, we provide additional evidence for a dissociation between conflict and errors both at the response level and at the task level.

  20. When do latent class models overstate accuracy for diagnostic and other classifiers in the absence of a gold standard?

    PubMed

    Spencer, Bruce D

    2012-06-01

    Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
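
    The complement relationships invoked here (type I error rate = 1 − specificity, type II error rate = 1 − sensitivity) can be made concrete with a gold-standard 2×2 table; the counts below are hypothetical:

```python
def accuracy_components(tp, fp, fn, tn):
    """Components of classifier accuracy against a gold standard.
    Type I error rate = 1 - specificity (false-positive rate);
    Type II error rate = 1 - sensitivity (false-negative rate)."""
    sensitivity = tp / (tp + fn)   # P(test positive | truly positive)
    specificity = tn / (tn + fp)   # P(test negative | truly negative)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "type_I": 1 - specificity,
        "type_II": 1 - sensitivity,
    }

# Hypothetical comparison: 90 of 100 diseased subjects test positive,
# 160 of 200 non-diseased subjects test negative
m = accuracy_components(tp=90, fp=40, fn=10, tn=160)
```

    The abstract's point is that when the "truth" column of this table comes from a latent class rather than a gold standard, the resulting type I and type II rates can understate the true ones.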

  1. The Environmental Context of Patient Safety and Medical Errors

    ERIC Educational Resources Information Center

    Wholey, Douglas; Moscovice, Ira; Hietpas, Terry; Holtzman, Jeremy

    2004-01-01

    The environmental context of patient safety and medical errors was explored with specific interest in rural settings. Special attention was paid to unique features of rural health care organizations and their environment that relate to the patient safety issue and medical errors (including the distribution of patients, types of adverse events…

  2. On the use of biomathematical models in patient-specific IMRT dose QA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhen Heming; Nelms, Benjamin E.; Tome, Wolfgang A.

    2013-07-15

    Purpose: To investigate the use of biomathematical models such as tumor control probability (TCP) and normal tissue complication probability (NTCP) as new quality assurance (QA) metrics. Methods: Five different types of error (MLC transmission, MLC penumbra, MLC tongue and groove, machine output, and MLC position) were intentionally induced in 40 clinical intensity modulated radiation therapy (IMRT) patient plans (20 head-and-neck (H&N) cases and 20 prostate cases) to simulate both treatment planning system errors and machine delivery errors in the IMRT QA process. The changes in TCP and NTCP for eight different anatomic structures (H&N: CTV, GTV, both parotids, spinal cord, larynx; prostate: CTV, rectal wall) were calculated as the new QA metrics to quantify the clinical impact on patients. The correlation between the change in TCP/NTCP and the change in selected DVH values was also evaluated. The relation between TCP/NTCP change and the characteristics of the TCP/NTCP curves is discussed. Results: ΔTCP and ΔNTCP were summarized for each type of induced error and each structure. The changes/degradations in TCP and NTCP caused by the errors vary widely depending on dose patterns unique to each plan, and are good indicators of each plan's "robustness" to that type of error. Conclusions: In this in silico QA study the authors have demonstrated the possibility of using biomathematical models not only as patient-specific QA metrics but also as objective indicators that quantify, pretreatment, a plan's robustness with respect to possible error types.
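
    The TCP/NTCP metrics discussed in this record can be sketched with standard textbook forms rather than the authors' specific models: a Poisson TCP with a simple exponential cell-kill term and a Lyman-Kutcher-Burman (LKB) NTCP. All parameter values below are illustrative assumptions, not clinically validated:

```python
import math

def poisson_tcp(dose_gy, alpha=0.3, n_clonogens=1e7):
    """Poisson TCP with exponential cell kill: TCP = exp(-N0 * e^(-alpha*D))."""
    return math.exp(-n_clonogens * math.exp(-alpha * dose_gy))

def lkb_ntcp(eud_gy, td50=55.0, m=0.15):
    """LKB NTCP: standard-normal CDF of t = (EUD - TD50) / (m * TD50)."""
    t = (eud_gy - td50) / (m * td50)
    return 0.5 * (1 + math.erf(t / math.sqrt(2)))

# A hypothetical delivery error that drops target dose from 70 Gy to 66 Gy:
tcp_planned = poisson_tcp(70.0)
tcp_error = poisson_tcp(66.0)
delta_tcp = tcp_planned - tcp_error   # the kind of ΔTCP metric used as a QA flag
```

    The steepness of the TCP/NTCP curves around the prescription dose is what makes ΔTCP/ΔNTCP sensitive QA metrics: the same physical error produces very different Δ values depending on where the plan sits on the curve.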

  3. Identification of Error Types in Preservice Teachers' Attempts to Create Fraction Story Problems for Specified Operations

    ERIC Educational Resources Information Center

    McAllister, Cheryl J.; Beaver, Cheryl

    2012-01-01

    The purpose of this research was to determine if recognizable error types exist in the work of preservice teachers required to create story problems for specific fraction operations. Students were given a particular single-operation fraction expression and asked to do the calculation and then create a story problem that would require the use of…

  4. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    PubMed

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-05-01

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared to the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
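
    The attenuation and method-of-moments correction mentioned here can be sketched for the simplest case: a single covariate with classical error of known variance and ordinary least squares (the study's linear mixed model with autocorrelated and Berkson components is considerably more involved). All parameter values are illustrative:

```python
import random

def fit_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

rng = random.Random(42)
n, beta, var_u = 5000, 2.0, 0.5   # var_u: classical error variance, assumed known

x_true = [rng.gauss(0, 1) for _ in range(n)]
x_obs = [x + rng.gauss(0, var_u ** 0.5) for x in x_true]   # classical error:
#   x_obs = x_true + noise; Berkson error would instead add noise to x_true
#   given x_obs and leave the slope approximately unbiased.
y = [beta * x + rng.gauss(0, 0.5) for x in x_true]

naive = fit_slope(x_obs, y)       # attenuated toward 0 by the reliability ratio

# Method-of-moments correction: divide by the estimated reliability ratio
m_obs = sum(x_obs) / n
var_obs = sum((a - m_obs) ** 2 for a in x_obs) / (n - 1)
reliability = (var_obs - var_u) / var_obs   # ~ var(x_true) / var(x_obs)
corrected = naive / reliability
```

    With var(x_true) = 1 and var_u = 0.5, the reliability ratio is about 2/3, so the naive slope lands near 1.33 instead of 2.0, and dividing by the estimated reliability recovers the true slope up to sampling noise.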

  5. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.

  6. Achievement Error Differences of Students with Reading versus Math Disorders

    ERIC Educational Resources Information Center

    Avitia, Maria; DeBiase, Emily; Pagirsky, Matthew; Root, Melissa M.; Howell, Meiko; Pan, Xingyu; Knupp, Tawnya; Liu, Xiaochen

    2017-01-01

    The purpose of this study was to understand and compare the types of errors students with a specific learning disability in reading and/or writing (SLD-R/W) and those with a specific learning disability in math (SLD-M) made in the areas of reading, writing, language, and mathematics. Clinical samples were selected from the norming population of…

  7. Identification and correction of systematic error in high-throughput sequence data

    PubMed Central

    2011-01-01

    Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
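    The core idea of flagging "statistically unlikely accumulations of errors at specific locations" can be sketched with a binomial tail test. This toy version is not SysCall's classifier; the per-base error rate, coverage, and flagging threshold are illustrative assumptions.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): upper-tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

coverage, error_rate = 50, 0.01   # hypothetical site coverage and base-call error rate

# A site with 1 mismatch in 50 reads is unremarkable under random error;
# 10 mismatches in 50 reads is a statistically unlikely accumulation and
# gets flagged as a candidate systematic-error (or variant) site.
for mismatches in (1, 10):
    pval = binom_sf(mismatches, coverage, error_rate)
    flag = "FLAG" if pval < 1e-4 else "ok"
    print(mismatches, f"{pval:.2e}", flag)
```

Distinguishing such flagged sites from true heterozygous sites (where the non-reference fraction need not be 0.5, as in RNA-Seq) is exactly the harder classification problem the paper addresses.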

  8. Intonation and dialog context as constraints for speech recognition.

    PubMed

    Taylor, P; King, S; Isard, S; Wright, H

    1998-01-01

    This paper describes a way of using intonation and dialog context to improve the performance of an automatic speech recognition (ASR) system. Our experiments were run on the DCIEM Maptask corpus, a corpus of spontaneous task-oriented dialog speech. This corpus has been tagged according to a dialog analysis scheme that assigns each utterance to one of 12 "move types," such as "acknowledge," "query-yes/no" or "instruct." Most ASR systems use a bigram language model to constrain the possible sequences of words that might be recognized. Here we use a separate bigram language model for each move type. We show that when the "correct" move-specific language model is used for each utterance in the test set, the word error rate of the recognizer drops. Of course when the recognizer is run on previously unseen data, it cannot know in advance what move type the speaker has just produced. To determine the move type we use an intonation model combined with a dialog model that puts constraints on possible sequences of move types, as well as the speech recognizer likelihoods for the different move-specific models. In the full recognition system, the combination of automatic move type recognition with the move specific language models reduces the overall word error rate by a small but significant amount when compared with a baseline system that does not take intonation or dialog acts into account. Interestingly, the word error improvement is restricted to "initiating" move types, where word recognition is important. In "response" move types, where the important information is conveyed by the move type itself--for example, positive versus negative response--there is no word error improvement, but recognition of the response types themselves is good. The paper discusses the intonation model, the language models, and the dialog model in detail and describes the architecture in which they are combined.
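    The move-specific language-model idea can be sketched in a few lines: train one bigram model per move type, then score an utterance under each. The corpora below are tiny invented examples, not the DCIEM Maptask data, and the add-one smoothing with an assumed vocabulary size is a simplification of what a real recognizer would use.

```python
from collections import Counter

def train_bigrams(sentences):
    """Count bigrams and preceding-token unigrams over a toy corpus."""
    bigrams, unigrams = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        unigrams.update(toks[:-1])
        bigrams.update(zip(toks[:-1], toks[1:]))
    return bigrams, unigrams

VOCAB = 20  # assumed vocabulary size for add-one smoothing (illustrative)

def score(model, sentence):
    """Add-one-smoothed bigram probability of a sentence under one model."""
    bigrams, unigrams = model
    toks = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for a, b in zip(toks[:-1], toks[1:]):
        p *= (bigrams[(a, b)] + 1) / (unigrams[a] + VOCAB)
    return p

# One toy corpus per move type (hypothetical utterances).
models = {
    "acknowledge": train_bigrams(["okay", "right okay", "okay right"]),
    "instruct":    train_bigrams(["go left", "go round the mill"]),
}
utterance = "okay right"
best = max(models, key=lambda m: score(models[m], utterance))
print(best)
```

Picking the move type whose model best fits the utterance is the toy analogue of combining the intonation and dialog models with the move-specific recognizer likelihoods.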

  9. Issues with data and analyses: Errors, underlying themes, and potential solutions

    PubMed Central

    Allison, David B.

    2018-01-01

    Some aspects of science, taken at the broadest level, are universal in empirical research. These include collecting, analyzing, and reporting data. In each of these aspects, errors can and do occur. In this work, we first discuss the importance of focusing on statistical and data errors to continually improve the practice of science. We then describe underlying themes of the types of errors and postulate contributing factors. To do so, we describe a case series of relatively severe data and statistical errors coupled with surveys of some types of errors to better characterize the magnitude, frequency, and trends. Having examined these errors, we then discuss the consequences of specific errors or classes of errors. Finally, given the extracted themes, we discuss methodological, cultural, and system-level approaches to reducing the frequency of commonly observed errors. These approaches will plausibly contribute to the self-critical, self-correcting, ever-evolving practice of science, and ultimately to furthering knowledge. PMID:29531079

  10. Multiple statistical tests: Lessons from a d20.

    PubMed

    Madan, Christopher R

    2016-01-01

    Statistical analyses are often conducted with α = .05. When multiple statistical tests are conducted, this procedure needs to be adjusted to compensate for the otherwise inflated Type I error. In tabletop gaming, it is sometimes desirable to roll a 20-sided die (or 'd20') twice and take the greater outcome. Here I draw from probability theory and the case of a d20, where the probability of obtaining any specific outcome is 1/20, to determine the probability of obtaining a specific outcome (a Type I error) at least once across repeated, independent statistical tests.
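    The quantity the abstract describes is one line of probability: with per-test α = 1/20, the chance of at least one Type I error across k independent tests is 1 − (19/20)^k, exactly the chance of rolling at least one natural 20 in k d20 rolls. A quick sketch:

```python
def familywise_error(k, alpha=1/20):
    """Probability of at least one false positive in k independent tests."""
    return 1 - (1 - alpha) ** k

print(round(familywise_error(1), 4))    # one test, one roll: 0.05
print(round(familywise_error(2), 4))    # "roll twice, take the greater": 0.0975
print(round(familywise_error(20), 4))   # 20 tests: a false positive becomes likely
```

By 20 tests the familywise rate is already about 0.64, which is why multiple-comparison corrections are needed.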

  11. Using warnings to reduce categorical false memories in younger and older adults.

    PubMed

    Carmichael, Anna M; Gutchess, Angela H

    2016-07-01

    Warnings about memory errors can reduce their incidence, although past work has largely focused on associative memory errors. The current study sought to explore whether warnings could be tailored to specifically reduce false recall of categorical information in both younger and older populations. Before encoding word pairs designed to induce categorical false memories, half of the younger and older participants were warned to avoid committing these types of memory errors. Older adults who received a warning committed fewer categorical memory errors, as well as other types of semantic memory errors, than those who did not receive a warning. In contrast, young adults' memory errors did not differ for the warning versus no-warning groups. Our findings provide evidence for the effectiveness of warnings at reducing categorical memory errors in older adults, perhaps by supporting source monitoring, reduction in reliance on gist traces, or through effective metacognitive strategies.

  12. Error identification, disclosure, and reporting: practice patterns of three emergency medicine provider types.

    PubMed

    Hobgood, Cherri; Xie, Jipan; Weiner, Bryan; Hooker, James

    2004-02-01

    To gather preliminary data on how the three major types of emergency medicine (EM) providers, physicians, nurses (RNs), and out-of-hospital personnel (EMTs), differ in error identification, disclosure, and reporting. A convenience sample of emergency department (ED) providers completed a brief survey designed to evaluate error frequency, disclosure, and reporting practices as well as error-based discussion and educational activities. One hundred sixteen subjects participated: 41 EMTs (35%), 33 RNs (28%), and 42 physicians (36%). Forty-five percent of EMTs, 56% of RNs, and 21% of physicians identified no clinical errors during the preceding year. When errors were identified, physicians learned of them via dialogue with RNs (58%), patients (13%), pharmacy (35%), and attending physicians (35%). For known errors, all providers were equally unlikely to inform the team caring for the patient. Disclosure to patients was limited and varied by provider type (19% EMTs, 23% RNs, and 74% physicians). Disclosure education was rare, with…

  13. Topographical gradients of semantics and phonology revealed by temporal lobe stimulation.

    PubMed

    Miozzo, Michele; Williams, Alicia C; McKhann, Guy M; Hamberger, Marla J

    2017-02-01

    Word retrieval is a fundamental component of oral communication, and it is well established that this function is supported by left temporal cortex. Nevertheless, the specific temporal areas mediating word retrieval and the particular linguistic processes these regions support have not been well delineated. Toward this end, we analyzed over 1000 naming errors induced by left temporal cortical stimulation in epilepsy surgery patients. Errors were primarily semantic (lemon → "pear"), phonological (horn → "corn"), non-responses, and delayed responses (correct responses after a delay), and each error type appeared predominantly in a specific region: semantic errors in mid-middle temporal gyrus (TG), phonological errors and delayed responses in middle and posterior superior TG, and non-responses in anterior inferior TG. To the extent that semantic errors, phonological errors and delayed responses reflect disruptions in different processes, our results imply topographical specialization of semantic and phonological processing. Specifically, results revealed an inferior-to-superior gradient, with more superior regions associated with phonological processing. Further, errors were increasingly semantically related to targets toward posterior temporal cortex. We speculate that detailed semantic input is needed to support phonological retrieval, and thus, the specificity of semantic input increases progressively toward posterior temporal regions implicated in phonological processing. Hum Brain Mapp 38:688-703, 2017. © 2016 Wiley Periodicals, Inc.

  14. Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.

    PubMed

    Pathak, Biswajit; Boruah, Bosanta R

    2017-12-01

    Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt. 16, 055403 (2014), doi:10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
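    The slope-to-phase step can be illustrated in one dimension. This is a toy stand-in for the 2-D Southwell geometry, not either algorithm from the paper: forward-difference slopes with added "centroiding" noise are inverted in the least-squares sense, with one extra equation pinning the mean (piston) phase to zero, since slopes carry no piston information. All sizes and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, h = 32, 1.0                                   # grid points, spacing (assumed)
x = np.arange(n)
phi_true = 0.01 * (x - (n - 1) / 2) ** 2         # a smooth test wavefront
phi_true -= phi_true.mean()                      # remove piston

# Measured slopes: forward differences plus slope-measurement noise.
slopes = np.diff(phi_true) / h + rng.normal(0, 1e-3, n - 1)

# Design matrix: one forward-difference row per slope, plus a final row
# constraining the mean phase to zero (fixing the piston ambiguity).
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i], A[i, i + 1] = -1 / h, 1 / h
A[n - 1, :] = 1.0 / n
b = np.append(slopes, 0.0)

phi_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
rms_err = np.sqrt(np.mean((phi_hat - phi_true) ** 2))
print(f"RMS reconstruction error: {rms_err:.4f}")
```

How the slope noise propagates into the reconstructed phase through this inversion is precisely the "error propagation" on which the two algorithms in the paper are compared.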

  15. Impact of electronic chemotherapy order forms on prescribing errors at an urban medical center: results from an interrupted time-series analysis.

    PubMed

    Elsaid, K; Truong, T; Monckeberg, M; McCarthy, H; Butera, J; Collins, C

    2013-12-01

    To evaluate the impact of electronic standardized chemotherapy templates on incidence and types of prescribing errors. A quasi-experimental interrupted time series with segmented regression. A 700-bed multidisciplinary tertiary care hospital with an ambulatory cancer center. A multidisciplinary team including oncology physicians, nurses, pharmacists and information technologists. Standardized, regimen-specific, chemotherapy prescribing forms were developed and implemented over a 32-month period. Trend of monthly prevented prescribing errors per 1000 chemotherapy doses during the pre-implementation phase (30 months), immediate change in the error rate from pre-implementation to implementation and trend of errors during the implementation phase. Errors were analyzed according to their types: errors in communication or transcription, errors in dosing calculation and errors in regimen frequency or treatment duration. Relative risk (RR) of errors in the post-implementation phase (28 months) compared with the pre-implementation phase was computed with 95% confidence interval (CI). Baseline monthly error rate was stable with 16.7 prevented errors per 1000 chemotherapy doses. A 30% reduction in prescribing errors was observed with initiating the intervention. With implementation, a negative change in the slope of prescribing errors was observed (coefficient = -0.338; 95% CI: -0.612 to -0.064). The estimated RR of transcription errors was 0.74; 95% CI (0.59-0.92). The estimated RR of dosing calculation errors was 0.06; 95% CI (0.03-0.10). The estimated RR of chemotherapy frequency/duration errors was 0.51; 95% CI (0.42-0.62). Implementing standardized chemotherapy-prescribing templates significantly reduced all types of prescribing errors and improved chemotherapy safety.
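    The segmented-regression model behind an interrupted time series fits a baseline level and trend, an immediate level change at implementation, and a post-implementation slope change. The sketch below runs on synthetic monthly error rates loosely shaped like the abstract's numbers (baseline 16.7 per 1000 doses, a level drop, a negative post-implementation slope); it is not the study's data or its full quasi-experimental analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(58)                    # 30 pre + 28 post (as in the study design)
t0 = 30                                   # implementation month
post = (months >= t0).astype(float)

# Hypothetical true process: flat baseline, immediate drop, then decline.
true = 16.7 + 0.0 * months - 5.0 * post - 0.3 * post * (months - t0)
y = true + rng.normal(0, 0.5, months.size)

X = np.column_stack([np.ones_like(months, dtype=float),  # intercept
                     months,                              # baseline trend
                     post,                                # level change at t0
                     post * (months - t0)])               # slope change after t0
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("level change:", round(coef[2], 2), " slope change:", round(coef[3], 3))
```

The coefficient on `post` is the immediate change in the error rate and the coefficient on `post * (months - t0)` is the change in slope, matching the quantities the study reports (e.g., its slope-change coefficient of -0.338).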

  16. Evaluation of Approaches to Deal with Low-Frequency Nuisance Covariates in Population Pharmacokinetic Analyses.

    PubMed

    Lagishetty, Chakradhar V; Duffull, Stephen B

    2015-11-01

    Clinical studies include occurrences of rare variables, such as genotypes, which, due to their frequency and strength, render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine whether such an effect is significant, since Type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, and hence be ignorable, or conversely be influential and therefore non-ignorable. When these covariate effects cannot be estimated due to limited power yet are non-ignorable, they are considered nuisance effects, in that they have to be accounted for but, because of Type I error, are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with Type I error inflation, (2) calibrating the strength that renders it non-ignorable and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type I error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed effect parameter.
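    Why a rare covariate inflates Wald-test Type I error can be shown with a far simpler setting than the paper's nonlinear mixed-effects models: a Wald z-test comparing a tiny "carrier" group against the rest rejects well above the nominal 5% even when there is no effect, because the carrier group's variance estimate is too unstable for the normal approximation. Group sizes and replication counts below are illustrative assumptions.

```python
import random
import statistics

random.seed(7)

def wald_reject(n_carrier=3, n_rest=50):
    """One null-hypothesis replicate: both groups drawn from N(0, 1)."""
    carrier = [random.gauss(0, 1) for _ in range(n_carrier)]
    rest = [random.gauss(0, 1) for _ in range(n_rest)]
    se = (statistics.variance(carrier) / n_carrier
          + statistics.variance(rest) / n_rest) ** 0.5
    z = (statistics.fmean(carrier) - statistics.fmean(rest)) / se
    return abs(z) > 1.96                      # nominal two-sided 5% test

reps = 4000
rate = sum(wald_reject() for _ in range(reps)) / reps
print(f"empirical Type I error: {rate:.3f}   (nominal 0.05)")
```

With only three carriers the empirical rejection rate lands well above 0.05, which is the calibration problem the study's objective (1) addresses.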

  17. Prevalence of medication errors in primary health care at Bahrain Defence Force Hospital – prescription-based study

    PubMed Central

    Aljasmi, Fatema; Almalood, Fatema

    2018-01-01

    Background One of the important activities that physicians – particularly general practitioners – perform is prescribing. It occurs in most health care facilities and especially in primary health care (PHC) settings. Objectives This study aims to determine what types of prescribing errors are made in PHC at Bahrain Defence Force (BDF) Hospital, and how common they are. Methods This was a retrospective study of data from PHC at BDF Hospital. The data consisted of 379 prescriptions randomly selected from the pharmacy between March and May 2013, and errors in the prescriptions were classified into five types: major omission, minor omission, commission, integration, and skill-related errors. Results Of the total prescriptions, 54.4% (N=206) were given to male patients and 45.6% (N=173) to female patients; 24.8% were given to patients under the age of 10 years. On average, there were 2.6 drugs per prescription. In the prescriptions, 8.7% of drugs were prescribed by their generic names, and 28% (N=106) of prescriptions included an antibiotic. Out of the 379 prescriptions, 228 had an error, and 44.3% (N=439) of the 992 prescribed drugs contained errors. The proportions of errors were as follows: 9.9% (N=38) were minor omission errors; 73.6% (N=323) were major omission errors; 9.3% (N=41) were commission errors; and 17.1% (N=75) were skill-related errors. Conclusion This study provides awareness of the presence of prescription errors and frequency of the different types of errors that exist in this hospital. Understanding the different types of errors could help future studies explore the causes of specific errors and develop interventions to reduce them. Further research should be conducted to understand the causes of these errors and demonstrate whether the introduction of electronic prescriptions has an effect on patient outcomes. PMID:29445304

  18. [Comparative studies of serological typing and HLA-A, B antigen genotyping with PCR using sequence-specific primers].

    PubMed

    Wu, Da-lin; Ling, Han-xin; Tang, Hao

    2004-11-01

    To evaluate the accuracy of PCR with sequence-specific primers (PCR-SSP) for HLA-I genotyping and analyze the causes of the errors occurring in the genotyping. DNA samples were obtained from 34 clinical patients, and serological typing with monoclonal antibody (mAb) and HLA-A and B antigen genotyping with PCR-SSP were performed. HLA-A and B alleles were successfully typed in 34 clinical samples by mAb and PCR-SSP. No false positive or false negative results were found with PCR-SSP, whereas the rates of erroneous and missed diagnoses were markedly higher in serological detection, being 23.5% for HLA-A and 26.5% for HLA-B. Error or confusion was more likely to occur for the antigens A2 and A68, A32 and A33, and B5, B60 and B61. DNA typing for HLA class I (A, B antigens) by PCR-SSP has high resolution, high specificity, and good reproducibility, and is more suitable for clinical application than serological typing. PCR-SSP may accurately detect the alleles that are easily missed or mistaken in serological typing.

  19. Integrating photo-stimulable phosphor plates into dental and dental hygiene radiography curricula.

    PubMed

    Tax, Cara L; Robb, Christine L; Brillant, Martha G S; Doucette, Heather J

    2013-11-01

    It is not known whether the integration of photo-stimulable phosphor (PSP) plates into dental and dental hygiene curricula creates unique learning challenges for students. The purpose of this two-year study was to determine if dental hygiene students had more and/or different types of errors when using PSP plates compared to film and whether the PSP imaging plates had any particular characteristics that needed to be addressed in the learning process. Fifty-nine first-year dental hygiene students at one Canadian dental school were randomly assigned to two groups (PSP or film) before exposing their initial full mouth series on a teaching manikin using the parallel technique. The principal investigator determined the number and types of errors based on a specific set of performance criteria. The two groups (PSP vs. film) were compared for total number and type of errors made. Results of the study indicated the difference in the total number of errors made using PSP or film was not statistically significant; however, there was a difference in the types of errors made, with the PSP group having more horizontal errors than the film group. In addition, the study identified a number of unique characteristics of the PSP plates that required special consideration for teaching this technology.

  20. Auditing the Assignments of Top-Level Semantic Types in the UMLS Semantic Network to UMLS Concepts

    PubMed Central

    He, Zhe; Perl, Yehoshua; Elhanan, Gai; Chen, Yan; Geller, James; Bian, Jiang

    2018-01-01

    The Unified Medical Language System (UMLS) is an important terminological system. By the policy of its curators, each concept of the UMLS should be assigned the most specific Semantic Types (STs) in the UMLS Semantic Network (SN). Hence, the Semantic Types of most UMLS concepts are assigned at or near the bottom (leaves) of the UMLS Semantic Network. While most ST assignments are correct, some errors do occur. Therefore, Quality Assurance efforts of UMLS curators for ST assignments should concentrate on automatically detected sets of UMLS concepts with higher error rates than random sets. In this paper, we investigate the assignments of top-level semantic types in the UMLS semantic network to concepts, identify potential erroneous assignments, define four categories of errors, and thus provide assistance to curators of the UMLS in avoiding these assignment errors. Human experts analyzed samples of concepts assigned 10 of the top-level semantic types and categorized the erroneous ST assignments into these four logical categories. Two-thirds of the concepts assigned these 10 top-level semantic types are erroneous. Our results demonstrate that reviewing top-level semantic type assignments to concepts provides an effective approach to UMLS quality assurance compared with reviewing a random selection of semantic type assignments. PMID:29375930

  2. The Effect of Interruptions on Part 121 Air Carrier Operations

    NASA Technical Reports Server (NTRS)

    Damos, Diane L.

    1998-01-01

    The primary purpose of this study was to determine the relative priorities of various events and activities by examining the probability that a given activity was interrupted by a given event. The analysis will begin by providing frequency of interruption data by crew position (captain versus first officer) and event type. Any differences in the pattern of interruptions between the first officers and the captains will be explored and interpreted in terms of standard operating procedures. Subsequent data analyses will focus on comparing the frequency of interruptions for different types of activities and for the same activities under normal versus emergency conditions. Briefings and checklists will receive particular attention. The frequency with which specific activities are interrupted under multiple- versus single-task conditions also will be examined; because the majority of multiple-task data were obtained under laboratory conditions, LOFT-type tapes offer a unique opportunity to examine concurrent task performance under 'real-world' conditions. A second purpose of this study is to examine the effects of the interruptions on performance. More specifically, when possible, the time to resume specific activities will be compared to determine if pilots are slower to resume certain types of activities. Errors in resumption or failures to resume specific activities will be noted and any patterns in these errors will be identified. Again, particular attention will be given to the effects of interruptions on the completion of checklists and briefings. Other types of errors and missed events (i.e., the crew should have responded to the event but did not) will be examined. Any methodology using interruptions to examine task prioritization must be able to identify when an interruption has occurred and describe the ongoing activities that were interrupted. Both of these methodological problems are discussed in detail in the following section…

  3. The selective power of causality on memory errors.

    PubMed

    Marsh, Jessecae K; Kulkofsky, Sarah

    2015-01-01

    We tested the influence of causal links on the production of memory errors in a misinformation paradigm. Participants studied a set of statements about a person, which were presented as either individual statements or pairs of causally linked statements. Participants were then provided with causally plausible and causally implausible misinformation. We hypothesised that studying information connected with causal links would promote representing information in a more abstract manner. As such, we predicted that causal information would not provide an overall protection against memory errors, but rather would preferentially help in the rejection of misinformation that was causally implausible, given the learned causal links. In two experiments, we measured whether the causal linkage of information would be generally protective against all memory errors or only selectively protective against certain types of memory errors. Causal links helped participants reject implausible memory lures, but did not protect against plausible lures. Our results suggest that causal information may promote an abstract storage of information that helps prevent only specific types of memory errors.

  4. Onorbit IMU alignment error budget

    NASA Technical Reports Server (NTRS)

    Corson, R. W.

    1980-01-01

    The Star Tracker, Crew Optical Alignment Sight (COAS), and Inertial Measurement Unit (IMU) form a complex navigation system with a multitude of error sources. A complete list of the system errors is presented. The errors were combined in a rational way to yield an estimate of the IMU alignment accuracy for STS-1. The expected standard deviation in the IMU alignment error for STS-1 type alignments was determined to be 72 arc seconds per axis for star tracker alignments and 188 arc seconds per axis for COAS alignments. These estimates are based on current knowledge of the star tracker, COAS, IMU, and navigation base error specifications, and were partially verified by preliminary Monte Carlo analysis.
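    Rolling up such a budget typically means root-sum-squaring the independent 1-sigma error sources, since variances of independent errors add. The sketch below shows the mechanics only; the source names and arc-second values are invented for illustration and are not the actual STS-1 budget entries.

```python
from math import sqrt

def rss(sigmas):
    """Combine independent 1-sigma error sources: total variance = sum of variances."""
    return sqrt(sum(s * s for s in sigmas))

budget = {                      # hypothetical 1-sigma contributions, arc seconds
    "star tracker noise": 40.0,
    "IMU gyro drift": 35.0,
    "navigation base misalignment": 30.0,
    "mounting/thermal": 33.0,
}
total = rss(budget.values())
print(f"combined 1-sigma alignment error: {total:.0f} arcsec/axis")
```

Because contributions combine in quadrature, the largest single source dominates the total, which is why error budgets focus attention on the few biggest terms.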

  5. Correcting evaluation bias of relational classifiers with network cross validation

    DOE PAGES

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that, combined with paired t-tests, produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1 − Type II error).
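    The paper's core warning can be reproduced with a crude stand-in for overlapping network samples: give the K per-fold performance differences a shared noise component. A paired t-test treats the K differences as independent, so the shared component never shows up in the estimated standard error and the test rejects far too often under the null of "no difference between the two algorithms". The variance split and fold count below are illustrative assumptions, not the paper's simulation design.

```python
import random
import statistics

random.seed(3)
K = 10                    # number of folds / paired differences
T_CRIT = 2.262            # two-sided 5% critical value for t with 9 df

def paired_t_reject():
    """One null replicate: equal algorithms, overlap modeled as shared noise."""
    shared = random.gauss(0, 1)                       # overlap-induced component
    d = [shared + random.gauss(0, 1) for _ in range(K)]
    t = statistics.fmean(d) / (statistics.stdev(d) / K ** 0.5)
    return abs(t) > T_CRIT

reps = 2000
rate = sum(paired_t_reject() for _ in range(reps)) / reps
print(f"empirical Type I error: {rate:.2f}   (nominal 0.05)")
```

The empirical rejection rate lands far above 5%, and it grows with the shared component's variance, mirroring the paper's finding that Type I error rises with the correlation among instances.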

  6. A false positive food chain error associated with a generic predator gut content ELISA

    USDA-ARS?s Scientific Manuscript database

    Conventional prey-specific gut content ELISA and PCR assays are useful for identifying predators of insect pests in nature. However, these assays are prone to yielding certain types of food chain errors. For instance, it is possible that prey remains can pass through the food chain as the result of ...

  7. Sources of Error in Substance Use Prevalence Surveys

    PubMed Central

    Johnson, Timothy P.

    2014-01-01

    Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511

  8. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
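
    The recommended two-stage approach weights each cluster mean by the inverse of its theoretical variance, which for Gaussian outcomes is the between-cluster variance plus the within-cluster variance divided by the cluster size. A sketch of that weighting (the variance components and cluster sizes are hypothetical):

```python
import numpy as np

def two_stage_weights(cluster_sizes, sigma_b2, sigma_w2):
    """Inverse of the theoretical variance of each cluster mean:
    Var(mean_j) = sigma_b^2 + sigma_w^2 / n_j."""
    n = np.asarray(cluster_sizes, dtype=float)
    return 1.0 / (sigma_b2 + sigma_w2 / n)

def weighted_arm_mean(cluster_means, weights):
    """Stage two: weighted average of cluster means within one arm."""
    m = np.asarray(cluster_means, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w * m).sum() / w.sum())

# Hypothetical arm with unequal cluster sizes and variance components
sizes = [8, 12, 40]
w = two_stage_weights(sizes, sigma_b2=0.2, sigma_w2=1.0)
arm_mean = weighted_arm_mean([2.1, 1.8, 2.4], w)
```

With equal cluster sizes the weights are equal and the method reduces to the ordinary mean of cluster means, consistent with the paper's finding that both one-stage and two-stage models are exact in the balanced case.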

  9. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    PubMed

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, and errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  10. Procrustes-based geometric morphometrics on MRI images: An example of inter-operator bias in 3D landmarks and its impact on big datasets.

    PubMed

    Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea

    2018-01-01

    Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'.
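
    Procrustes-based geometric morphometrics removes location, scale, and orientation before comparing landmark configurations, so any residual difference between operators reflects digitization error in shape. A minimal two-configuration ordinary Procrustes alignment (reflections are not handled, and the landmark coordinates are invented for illustration):

```python
import numpy as np

def procrustes_align(X, Y):
    """Superimpose landmark configuration Y onto X: remove translation,
    centroid-size scaling, and rotation (ordinary Procrustes analysis)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    # Optimal rotation and scale from the SVD of the cross-covariance
    U, s, Vt = np.linalg.svd(Yc.T @ Xc)
    aligned = s.sum() * (Yc @ (U @ Vt))
    return Xc, aligned

# Hypothetical 2D landmarks "digitized" twice; the second copy is the first
# rotated, scaled, and translated, i.e. the shape is identical
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
Y = 2.0 * X @ R.T + np.array([5.0, -1.0])
Xc, Y_aligned = procrustes_align(X, Y)
residual = np.linalg.norm(Xc - Y_aligned)  # ~0: no shape difference
```

In a replicated-digitization study, the same alignment would be applied to each operator's landmark set, and the operator effect assessed on the aligned (shape) coordinates, e.g. as the share of total shape variance attributable to operator.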

  12. Comparison of disagreement and error rates for three types of interdepartmental consultations.

    PubMed

    Renshaw, Andrew A; Gould, Edwin W

    2005-12-01

    Previous studies have documented a relatively high rate of disagreement for interdepartmental consultations, but follow-up is limited. We reviewed the results of 3 types of interdepartmental consultations in our hospital during a 2-year period, including 328 incoming, 928 pathologist-generated outgoing, and 227 patient- or clinician-generated outgoing consults. The disagreement rate was significantly higher for incoming consults (10.7%) than for outgoing pathologist-generated consults (5.9%) (P = .06). Disagreement rates for outgoing patient- or clinician-generated consults were not significantly different from either other type (7.9%). Additional consultation, biopsy, or testing follow-up was available for 19 (54%) of 35, 14 (25%) of 55, and 6 (33%) of 18 incoming, outgoing pathologist-generated, and outgoing patient- or clinician-generated consults with disagreements, respectively; the percentage of errors varied widely (15/19 [79%], 8/14 [57%], and 2/6 [33%], respectively), but differences were not significant (P >.05 for each). Review of the individual errors revealed specific diagnostic areas in which improvement in performance might be made. Disagreement rates for interdepartmental consultation ranged from 5.9% to 10.7%, but only 33% to 79% represented errors. Additional consultation, tissue, and testing results can aid in distinguishing disagreements from errors.

  13. Bayesian analysis of input uncertainty in hydrological modeling: 2. Application

    NASA Astrophysics Data System (ADS)

    Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.

    2006-03-01

    The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) French Broad River and (2) Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging, high-dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.

  14. Statistical analysis of modeling error in structural dynamic systems

    NASA Technical Reports Server (NTRS)

    Hasselman, T. K.; Chrostowski, J. D.

    1990-01-01

    The paper presents a generic statistical model of the (total) modeling error for conventional space structures in their launch configuration. Modeling error is defined as the difference between analytical prediction and experimental measurement. It is represented by the differences between predicted and measured real eigenvalues and eigenvectors. Comparisons are made between pre-test and post-test models. Total modeling error is then subdivided into measurement error, experimental error and 'pure' modeling error, and comparisons made between measurement error and total modeling error. The generic statistical model presented in this paper is based on the first four global (primary structure) modes of four different structures belonging to the generic category of Conventional Space Structures (specifically excluding large truss-type space structures). As such, it may be used to evaluate the uncertainty of predicted mode shapes and frequencies, sinusoidal response, or the transient response of other structures belonging to the same generic category.
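
    The eigenvalue comparison underlying this kind of modeling-error statistic maps directly to modal frequencies: each eigenvalue is the squared natural frequency, so fractional frequency error is computed on the square roots. A small sketch with made-up predicted and measured eigenvalues:

```python
import numpy as np

def frequency_errors(pred_eigs, meas_eigs):
    """Fractional modal frequency errors; each eigenvalue is the squared
    angular frequency, so the comparison is made on square roots."""
    f_pred = np.sqrt(np.asarray(pred_eigs, dtype=float))
    f_meas = np.sqrt(np.asarray(meas_eigs, dtype=float))
    return f_meas / f_pred - 1.0

# Hypothetical predicted vs measured eigenvalues for the first three modes
errs = frequency_errors([4.0, 9.0, 25.0], [4.41, 8.82, 26.01])
```

Statistics (mean and standard deviation) of such errors over the first few global modes of several structures are what give the generic uncertainty model described in the abstract.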

  15. Verification of image orthorectification techniques for low-cost geometric inspection of masonry arch bridges

    NASA Astrophysics Data System (ADS)

    González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro

    2012-07-01

    A low-cost image orthorectification tool based on the utilization of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated in three different bridges by comparison with laser scanning data. The surveying process is very delicate and must strike a balance between working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. Results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent for 95 percent of the data. A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.

  16. Data-driven region-of-interest selection without inflating Type I error rate.

    PubMed

    Brooks, Joseph L; Zoumpoulaki, Alexia; Bowman, Howard

    2017-01-01

    In ERP and other large multidimensional neuroscience data sets, researchers often select regions of interest (ROIs) for analysis. The method of ROI selection can critically affect the conclusions of a study by causing the researcher to miss effects in the data or to detect spurious effects. In practice, to avoid inflating Type I error rate (i.e., false positives), ROIs are often based on a priori hypotheses or independent information. However, this can be insensitive to experiment-specific variations in effect location (e.g., latency shifts), reducing power to detect effects. Data-driven ROI selection, in contrast, is nonindependent and uses the data under analysis to determine ROI positions. Therefore, it has potential to select ROIs based on experiment-specific information and increase power for detecting effects. However, data-driven methods have been criticized because they can substantially inflate Type I error rate. Here, we demonstrate, using simulations of simple ERP experiments, that data-driven ROI selection can indeed be more powerful than a priori hypotheses or independent information. Furthermore, we show that data-driven ROI selection using the aggregate grand average from trials (AGAT), despite being based on the data at hand, can be safely used for ROI selection under many circumstances. However, when there is a noise difference between conditions, using the AGAT can inflate Type I error and should be avoided. We identify critical assumptions for use of the AGAT and provide a basis for researchers to use, and reviewers to assess, data-driven methods of ROI localization in ERP and other studies. © 2016 Society for Psychophysiological Research.
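
    The AGAT procedure pools all trials from both conditions into one aggregate grand average and locates the ROI from that pooled waveform, so the selection is blind to the condition difference being tested. A sketch on simulated ERP-like data (dimensions, amplitudes, and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def agat_roi(cond_a, cond_b, width=10):
    """Select an ROI window from the aggregate grand average (AGAT):
    trials from both conditions are pooled before locating the peak."""
    agat = np.concatenate([cond_a, cond_b]).mean(axis=0)
    center = int(np.argmax(np.abs(agat)))
    lo = max(0, center - width // 2)
    return lo, lo + width

# Simulated data: trials x timepoints, with a component near t = 50 that is
# larger in condition A (all values are invented for illustration)
t = np.arange(120)
component = np.exp(-0.5 * ((t - 50) / 5.0) ** 2)
cond_a = 1.0 * component + rng.normal(scale=1.0, size=(40, 120))
cond_b = 0.5 * component + rng.normal(scale=1.0, size=(40, 120))
lo, hi = agat_roi(cond_a, cond_b)  # window lands near t = 50
```

The condition contrast would then be tested only on data averaged within `[lo, hi)`. Note the paper's caveat: this is safe only when the two conditions have comparable noise levels.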

  17. Measuring food intake in studies of obesity.

    PubMed

    Lissner, Lauren

    2002-12-01

    The problem of how to measure habitual food intake in studies of obesity remains an enigma in nutritional research. The existence of obesity-specific underreporting was rather controversial until the advent of the doubly labelled water technique gave credence to previously anecdotal evidence that such a bias does in fact exist. This paper reviews a number of issues relevant to interpreting dietary data in studies involving obesity. Topics covered include: participation biases, normative biases,importance of matching method to study, selective underreporting, and a brief discussion of the potential implications of generalised and selective underreporting in analytical epidemiology. It is concluded that selective underreporting of certain food types by obese individuals would produce consequences in analytical epidemiological studies that are both unpredictable and complex. Since it is becoming increasingly acknowledged that selective reporting error does occur, it is important to emphasise that correction for energy intake is not sufficient to eliminate the biases from this type of error. This is true both for obesity-related selective reporting errors and more universal types of selective underreporting, e.g. foods of low social desirability. Additional research is urgently required to examine the consequences of this type of error.

  18. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    PubMed Central

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  19. Sensitivity in error detection of patient specific QA tools for IMRT plans

    NASA Astrophysics Data System (ADS)

    Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.

    2016-03-01

    The high complexity of dose calculation in treatment planning and the accurate delivery of IMRT plans require a high-precision verification method. The purpose of this study is to investigate the error detection capability of patient-specific QA tools for IMRT plans. Two H&N and two prostate IMRT plans were studied with the MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with errors introduced. The intentional errors consisted of prescribed dose changes (±2 to ±6%) and position shifts along the X-axis and Y-axis (±1 to ±5 mm). After measurement, gamma pass rates between original and modified plans were compared. The average gamma pass rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2 and 95.9% and 99.8% for portal dosimetry, respectively. In the H&N plans, MapCHECK2 detected position shift errors starting from 3 mm, while portal dosimetry detected errors starting from 2 mm. Both devices showed similar sensitivity in detecting position shift errors in the prostate plans. For the H&N plans, MapCHECK2 detected dose errors starting at ±4%, whereas portal dosimetry detected them from ±2%. For the prostate plans, both devices identified dose errors starting from ±4%. Sensitivity of error detection depends on the type of error and on plan complexity.
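
    The gamma pass rates reported by both QA tools combine a dose-difference tolerance and a distance-to-agreement tolerance into a single index per point. A simplified 1D global gamma analysis (the clinical tools work on 2D arrays with interpolation; the 3%/3 mm tolerances below are a common choice, assumed here rather than taken from the study):

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D global gamma analysis: dose_tol is a fraction of the
    maximum reference dose; dist_tol is in the units of `positions` (mm)."""
    ref = np.asarray(ref, dtype=float)
    meas = np.asarray(meas, dtype=float)
    x = np.asarray(positions, dtype=float)
    dd = dose_tol * ref.max()
    gammas = np.empty(ref.size)
    for i in range(ref.size):
        # Best (minimum) combined dose/distance mismatch over measured points
        g2 = ((meas - ref[i]) / dd) ** 2 + ((x - x[i]) / dist_tol) ** 2
        gammas[i] = np.sqrt(g2.min())
    return float(np.mean(gammas <= 1.0))

# Gaussian-like dose profile and a copy shifted by 1 mm (within 3%/3 mm)
x = np.arange(0.0, 50.0, 1.0)
ref = 100.0 * np.exp(-0.5 * ((x - 25.0) / 8.0) ** 2)
meas = 100.0 * np.exp(-0.5 * ((x - 26.0) / 8.0) ** 2)
rate = gamma_pass_rate(ref, meas, x)  # 1.0: every point passes
```

Larger position shifts or dose scalings push points past gamma = 1 and lower the pass rate, which is exactly the sensitivity behavior the study quantifies for each device.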

  20. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    PubMed

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  1. Effects of hemisphere speech dominance and seizure focus on patterns of behavioral response errors for three types of stimuli.

    PubMed

    Rausch, R; MacDonald, K

    1997-03-01

    We used a protocol consisting of a continuous presentation of stimuli with associated response requests during an intracarotid sodium amobarbital procedure (IAP) to study the effects of hemisphere injected (speech dominant vs. nondominant) and seizure focus (left temporal lobe vs. right temporal lobe) on the pattern of behavioral response errors for three types of visual stimuli (pictures of common objects, words, and abstract forms). Injection of the left speech dominant hemisphere compared to the right nondominant hemisphere increased overall errors and affected the pattern of behavioral errors. The presence of a seizure focus in the contralateral hemisphere increased overall errors, particularly for the right temporal lobe seizure patients, but did not affect the pattern of behavioral errors. Left hemisphere injections disrupted both naming and reading responses at a rate similar to that of matching-to-sample performance. Also, a short-term memory deficit was observed with all three stimuli. Long-term memory testing following the left hemisphere injection indicated that only for pictures of common objects were there fewer errors during the early postinjection period than for the later long-term memory testing. Therefore, despite the inability to respond to picture stimuli, picture items, but not words or forms, could be sufficiently encoded for later recall. In contrast, right hemisphere injections resulted in few errors, with a pattern suggesting a mild general cognitive decrease. A selective weakness in learning unfamiliar forms was found. Our findings indicate that different patterns of behavioral deficits occur following the left vs. right hemisphere injections, with selective patterns specific to stimulus type.

  2. Impact of specific language impairment and type of school on different language subsystems.

    PubMed

    Puglisi, Marina Leite; Befi-Lopes, Debora Maria

    2016-01-01

    This study aimed to explore quantitative and qualitative effects of type of school and specific language impairment (SLI) on different language abilities. In total, 204 Brazilian children aged 4 to 6 years participated in the study. Children were selected to form three groups: 1) 63 typically developing children studying in private schools (TDPri); 2) 102 typically developing children studying in state schools (TDSta); and 3) 39 children with SLI studying in state schools (SLISta). All individuals were assessed regarding expressive vocabulary, number morphology and morphosyntactic comprehension. All language subsystems were vulnerable to both environmental (type of school) and biological (SLI) effects. The relationship between the three language measures was exactly the same in all groups: vocabulary growth correlated with age and with the development of morphological abilities and morphosyntactic comprehension. Children with SLI showed atypical errors in the comprehension test at the age of 4, but presented a pattern of errors that gradually resembled typical development. The effect of type of school was marked by quantitative differences, while the effect of SLI was characterised by both quantitative and qualitative differences.

  3. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
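
    The multiplicative model's advantages follow from working in log space, where the error becomes additive and ordinary least squares cleanly separates the systematic part from the random part. A sketch on synthetic daily precipitation (the log-linear form and all parameter values are illustrative, not the paper's fitted values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "true" daily precipitation (mm) with a wide dynamic range
truth = rng.gamma(shape=0.8, scale=8.0, size=5000) + 0.1

# Estimate with multiplicative error: est = alpha * truth**beta * exp(eps),
# i.e. log(est) = log(alpha) + beta * log(truth) + eps
alpha, beta, sigma = 1.3, 0.9, 0.25
est = alpha * truth ** beta * np.exp(rng.normal(0.0, sigma, truth.size))

# In log space the error is additive, so least squares recovers the
# systematic part (alpha, beta) and the random part (sigma) separately
A = np.vstack([np.log(truth), np.ones_like(truth)]).T
coef, *_ = np.linalg.lstsq(A, np.log(est), rcond=None)
beta_hat, log_alpha_hat = coef
sigma_hat = float(np.std(np.log(est) - A @ coef))
```

An additive fit to the same data would show the nonconstant residual variance the abstract describes, since the spread of `est - truth` grows with the rain rate.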

  4. Frequency and Type of Situational Awareness Errors Contributing to Death and Brain Damage: A Closed Claims Analysis.

    PubMed

    Schulz, Christian M; Burden, Amanda; Posner, Karen L; Mincer, Shawn L; Steadman, Randolph; Wagner, Klaus J; Domino, Karen B

    2017-08-01

    Situational awareness errors may play an important role in the genesis of patient harm. The authors examined closed anesthesia malpractice claims for death or brain damage to determine the frequency and type of situational awareness errors. Surgical and procedural anesthesia death and brain damage claims in the Anesthesia Closed Claims Project database were analyzed. Situational awareness error was defined as failure to perceive relevant clinical information, failure to comprehend the meaning of available information, or failure to project, anticipate, or plan. Patient and case characteristics, primary damaging events, and anesthesia payments in claims with situational awareness errors were compared to other death and brain damage claims from 2002 to 2013. Anesthesiologist situational awareness errors contributed to death or brain damage in 198 of 266 claims (74%). Respiratory system damaging events were more common in claims with situational awareness errors (56%) than other claims (21%, P < 0.001). The most common specific respiratory events in error claims were inadequate oxygenation or ventilation (24%), difficult intubation (11%), and aspiration (10%). Payments were made in 85% of situational awareness error claims compared to 46% in other claims (P = 0.001), with no significant difference in payment size. Among 198 claims with anesthesia situational awareness error, perception errors were most common (42%), whereas comprehension errors (29%) and projection errors (29%) were relatively less common. Situational awareness error definitions were operationalized for reliable application to real-world anesthesia cases. Situational awareness errors may have contributed to catastrophic outcomes in three quarters of recent anesthesia malpractice claims. Situational awareness errors resulting in death or brain damage remain prevalent causes of malpractice claims in the 21st century.

  5. Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.

    PubMed

    Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean

    2017-06-01

    Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were due to lack of supervision or of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
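
    The comparison of electronic versus manual prescribing error rates (16.9% vs 8.2%) is the kind of contrast a pooled two-proportion z-test handles. A sketch in which the denominators are hypothetical, since the abstract reports only the rates:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts matching the reported 16.9% vs 8.2% error rates
z = two_proportion_z(169, 1000, 82, 1000)  # |z| > 1.96 implies p < 0.05
```

The actual study denominators (from the 17,889 reviewed prescriptions) would change z but, given the size of the rate difference, not the conclusion.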

  6. Reducing medication errors in critical care: a multimodal approach

    PubMed Central

    Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad

    2014-01-01

    The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478

  7. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, Vicki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  8. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  9. Using SAS PROC CALIS to fit Level-1 error covariance structures of latent growth models.

    PubMed

    Ding, Cherng G; Jane, Ten-Der

    2012-09-01

    In the present article, we demonstrate the use of SAS PROC CALIS to fit various types of Level-1 error covariance structures of latent growth models (LGM). Advantages of the SEM approach, on which PROC CALIS is based, include the capabilities of modeling the change over time for latent constructs, measured by multiple indicators; embedding LGM into a larger latent variable model; incorporating measurement models for latent predictors; and better assessing model fit and the flexibility in specifying error covariance structures. The strength of PROC CALIS comes at the cost of technical coding work, which needs to be specifically addressed. We provide a tutorial on the SAS syntax for modeling the growth of a manifest variable and the growth of a latent construct, focusing the documentation on the specification of Level-1 error covariance structures. Illustrations are conducted with the data generated from two given latent growth models. The coding provided is helpful when the growth model has been well determined and the Level-1 error covariance structure is to be identified.
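
    The abstract's coding details are SAS-specific; as a language-neutral illustration (not from the paper) of what a Level-1 error covariance structure is, the sketch below builds the first-order autoregressive, AR(1), structure commonly specified for repeated measures: a constant error variance on the diagonal and covariances that decay as rho**|i - j| with the lag between measurement occasions.

```python
import numpy as np

def ar1_cov(n_times, sigma2, rho):
    """AR(1) Level-1 error covariance: sigma2 * rho**|i - j|
    between the errors of measurement occasions i and j."""
    idx = np.arange(n_times)
    return sigma2 * rho ** np.abs(idx[:, None] - idx[None, :])

# Four measurement occasions, error variance 2.0, autocorrelation 0.5.
print(ar1_cov(4, sigma2=2.0, rho=0.5))
```

    Other Level-1 structures discussed in the LGM literature (e.g. compound symmetry, or heterogeneous variances over time) differ only in how this matrix is parameterized.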

  10. Impact of automated dispensing cabinets on medication selection and preparation error rates in an emergency department: a prospective and direct observational before-and-after study.

    PubMed

    Fanning, Laura; Jones, Nick; Manias, Elizabeth

    2016-04-01

    The implementation of automated dispensing cabinets (ADCs) in healthcare facilities appears to be increasing, in particular within Australian hospital emergency departments (EDs). While the investment in ADCs is on the increase, no studies have specifically investigated the impacts of ADCs on medication selection and preparation error rates in EDs. Our aim was to assess the impact of ADCs on medication selection and preparation error rates in an ED of a tertiary teaching hospital. This was a pre-intervention and post-intervention study involving direct observations of nurses completing medication selection and preparation activities before and after the implementation of ADCs in the original and new emergency departments within a 377-bed tertiary teaching hospital in Australia. Medication selection and preparation error rates were calculated and compared between these two periods. Secondary end points included the impact on medication error type and severity. A total of 2087 medication selection and preparation activities were observed among 808 patients pre and post intervention. Implementation of ADCs in the new ED resulted in a 64.7% reduction in medication selection and preparation errors (1.96% versus 0.69%, P = 0.017). All medication error types were reduced in the post-intervention study period. There was no significant impact on medication error severity, as all errors detected were categorised as minor. The implementation of ADCs could reduce medication selection and preparation errors and improve medication safety in an ED setting. © 2015 John Wiley & Sons, Ltd.

  11. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
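
    The well-established case the authors do not question, blinded, variance-based reassessment under a pre-planned algorithmic rule, can be checked by Monte Carlo simulation. The sketch below uses illustrative parameters and a simple reassessment rule, not the trial design from the paper: the second-stage sample size is chosen from the blinded pooled variance of stage 1, and under the null hypothesis the empirical rejection rate of the final z-test stays near the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n1=20, sigma=1.0):
    # Stage 1: n1 subjects per arm; both arms have mean 0 under H0.
    a1 = rng.normal(0, sigma, n1)
    b1 = rng.normal(0, sigma, n1)
    # Blinded reassessment: pool stage-1 data without treatment labels
    # and size the trial from the pooled variance (target delta = 0.5,
    # ~80% power at two-sided alpha = 0.05).
    pooled_var = np.var(np.concatenate([a1, b1]), ddof=1)
    n_total = max(n1, int(np.ceil(2 * (1.96 + 0.84) ** 2 * pooled_var / 0.5 ** 2)))
    n2 = n_total - n1
    a = np.concatenate([a1, rng.normal(0, sigma, n2)])
    b = np.concatenate([b1, rng.normal(0, sigma, n2)])
    # Final two-sample z-test (sigma treated as known, for simplicity).
    z = (a.mean() - b.mean()) / np.sqrt(sigma**2 / len(a) + sigma**2 / len(b))
    return abs(z) > 1.96

rejections = sum(one_trial() for _ in range(20000)) / 20000
print(f"empirical type I error: {rejections:.3f}")  # close to the nominal 0.05
```

    Reproducing the paper's inflation result would require adding a secondary endpoint with a non-null effect and a reassessment rule that exploits it; the scaffold above is the baseline such a simulation would start from.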

  12. Evidence-based anatomical review areas derived from systematic analysis of cases from a radiological departmental discrepancy meeting.

    PubMed

    Chin, S C; Weir-McCall, J R; Yeap, P M; White, R D; Budak, M J; Duncan, G; Oliver, T B; Zealley, I A

    2017-10-01

    To produce short checklists of specific anatomical review sites for different regions of the body based on the frequency of radiological errors reviewed at radiology discrepancy meetings, thereby creating "evidence-based" review areas for radiology reporting. A single centre discrepancy database was retrospectively reviewed from a 5-year period. All errors were classified by type, modality, body system, and specific anatomical location. Errors were assigned to one of four body regions: chest, abdominopelvic, central nervous system (CNS), and musculoskeletal (MSK). Frequencies of errors in anatomical locations were then analysed. There were 561 errors in 477 examinations; 290 (46%) errors occurred in the abdomen/pelvis, 99 (15.7%) in the chest, 117 (18.5%) in the CNS, and 125 (19.9%) in the MSK system. In each body system, the five most common locations were chest: lung bases on computed tomography (CT), apices on radiography, pulmonary vasculature, bones, and mediastinum; abdominopelvic: vasculature, colon, kidneys, liver, and pancreas; CNS: intracranial vasculature, peripheral cerebral grey matter, bone, parafalcine, and the frontotemporal lobes surrounding the Sylvian fissure; and MSK: calvarium, sacrum, pelvis, chest, and spine. The five listed locations accounted for >50% of all perceptual errors, suggesting an avenue for focused review at the end of reporting. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  13. Impeding Xist expression from the active X chromosome improves mouse somatic cell nuclear transfer.

    PubMed

    Inoue, Kimiko; Kohda, Takashi; Sugimoto, Michihiko; Sado, Takashi; Ogonuki, Narumi; Matoba, Shogo; Shiura, Hirosuke; Ikeda, Rieko; Mochida, Keiji; Fujii, Takashi; Sawai, Ken; Otte, Arie P; Tian, X Cindy; Yang, Xiangzhong; Ishino, Fumitoshi; Abe, Kuniya; Ogura, Atsuo

    2010-10-22

    Cloning mammals by means of somatic cell nuclear transfer (SCNT) is highly inefficient because of erroneous reprogramming of the donor genome. Reprogramming errors appear to arise randomly, but the nature of nonrandom, SCNT-specific errors remains elusive. We found that Xist, a noncoding RNA that inactivates one of the two X chromosomes in females, was ectopically expressed from the active X (Xa) chromosome in cloned mouse embryos of both sexes. Deletion of Xist on Xa showed normal global gene expression and resulted in about an eight- to ninefold increase in cloning efficiency. We also identified an Xist-independent mechanism that specifically down-regulated a subset of X-linked genes through somatic-type repressive histone blocks. Thus, we have identified nonrandom reprogramming errors in mouse cloning that can be altered to improve the efficiency of SCNT methods.

  14. Improved Statistics for Genome-Wide Interaction Analysis

    PubMed Central

    Ueki, Masao; Cordell, Heather J.

    2012-01-01

    Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new “joint effects” statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al.'s originally-proposed statistics, on account of the inflated error rate that can result. PMID:22496670
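
    The adjusted statistics themselves are defined in the paper; as a generic, hypothetical illustration of the case-only idea they build on, the sketch below computes a simple case-only interaction z-statistic from a 2x2 carrier-status table of cases (log odds ratio divided by its standard error). Under the rare-disease assumption, with the two loci independent in the population, absence of interaction implies an odds ratio near 1 among cases.

```python
import numpy as np

def case_only_z(table):
    """Case-only interaction z-statistic from a 2x2 table of cases.

    table[i][j] = number of cases with carrier status i at locus A
    and carrier status j at locus B. The log odds ratio estimates the
    multiplicative interaction; its variance is the sum of reciprocal
    cell counts (Woolf's formula)."""
    t = np.asarray(table, dtype=float)
    log_or = np.log(t[0, 0] * t[1, 1] / (t[0, 1] * t[1, 0]))
    se = np.sqrt((1.0 / t).sum())
    return log_or / se

# Independent loci among cases: OR = (40*10)/(20*20) = 1, so z = 0.
print(case_only_z([[40, 20], [20, 10]]))  # → 0.0
```

    The paper's point is precisely that such naive statistics can pick up main effects under linkage disequilibrium, which is what the adjusted and "joint effects" versions are designed to avoid.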

  15. Reflection of medical error highlighted on media in Turkey: A retrospective study

    PubMed Central

    Isik, Oguz; Bayin, Gamze; Ugurluoglu, Ozgur

    2016-01-01

    Objective: This study was performed with the aim of identifying how news on medical errors has been transmitted, and how the types, causes, and outcomes of medical errors have been reflected by the media in Turkey. Methods: A content analysis method was used, and in this context the data for the study were acquired by scanning the five national newspapers with the largest circulations for news about medical errors between the years 2012 and 2015. Specific selection criteria were applied to the resulting news items, and 116 items remained after all eliminations. Results: According to the results of the study, the largest share of the medical errors reported in the news (40.5%) resulted from negligence by medical staff. Medical errors were attributed to physicians in 74.1% of cases, and they most commonly occurred in state hospitals (31.9%). Another important result was that the reported medical errors largely ended either in patient death (51.7%) or in permanent damage and disability (25.0%). Conclusion: The news concerning medical errors provided information about the types, causes, and outcomes of these errors, and reflected the media's point of view on the issue. Examining the content of media-reported medical errors is important and calls for appropriate interventions to avoid and minimize the occurrence of medical errors by improving the healthcare delivery system. PMID:27882026

  16. Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2013-10-02

    Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.

  17. Assessing the accuracy of the International Classification of Diseases codes to identify abusive head trauma: a feasibility study.

    PubMed

    Berger, Rachel P; Parks, Sharyn; Fromkin, Janet; Rubin, Pamela; Pecora, Peter J

    2015-04-01

    To assess the accuracy of an International Classification of Diseases (ICD) code-based operational case definition for abusive head trauma (AHT). Subjects were children <5 years of age evaluated for AHT by a hospital-based Child Protection Team (CPT) at a tertiary care paediatric hospital with a completely electronic medical record (EMR) system. Subjects were designated as non-AHT traumatic brain injury (TBI) or AHT based on whether the CPT determined that the injuries were due to AHT. The sensitivity and specificity of the ICD-based definition were calculated. There were 223 children evaluated for AHT: 117 AHT and 106 non-AHT TBI. The sensitivity and specificity of the ICD-based operational case definition were 92% (95% CI 85.8 to 96.2) and 96% (95% CI 92.3 to 99.7), respectively. All errors in sensitivity and three of the four specificity errors were due to coder error; one specificity error was a physician error. In a paediatric tertiary care hospital with an EMR system, the accuracy of an ICD-based case definition for AHT was high. Additional studies are needed to assess the accuracy of this definition in all types of hospitals in which children with AHT are cared for. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
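
    Sensitivity and specificity here are straightforward ratios. In the sketch below, the specificity counts are back-calculated from the abstract (four errors among the 106 non-AHT TBI children, i.e. 102 true negatives); the sensitivity counts are an assumption for illustration, since the abstract reports only the percentage.

```python
def sensitivity(tp, fn):
    """True positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

# 4 specificity errors among the 106 non-AHT TBI children -> 102 true negatives.
print(f"specificity = {specificity(102, 4):.1%}")  # → 96.2%
# Assumed for illustration: ~108 of the 117 AHT children correctly coded.
print(f"sensitivity = {sensitivity(108, 9):.1%}")  # → 92.3%
```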

  18. Mindful Reading: Mindfulness Meditation Helps Keep Readers with Dyslexia and ADHD on the Lexical Track.

    PubMed

    Tarrasch, Ricardo; Berman, Zohar; Friedmann, Naama

    2016-01-01

    This study explored the effects of a Mindfulness-Based Stress Reduction (MBSR) intervention on reading, attention, and psychological well-being among people with developmental dyslexia and/or attention deficits. Various types of dyslexia exist, characterized by different error types. We examined a question that has not been tested so far: which types of errors (and dyslexias) are affected by MBSR training. To do so, we tested, using an extensive battery of reading tests, whether each participant had dyslexia, and which error types s/he makes, and then compared the rate of each error type before and after the MBSR workshop. We used a similar approach to attention disorders: we evaluated the participants' sustained, selective, executive, and orienting of attention to assess whether they had attention disorders, and if so, which functions were impaired. We then evaluated the effect of MBSR on each of the attention functions. Psychological measures including mindfulness, stress, reflection and rumination, life satisfaction, depression, anxiety, and sleep disturbances were also evaluated. Nineteen Hebrew readers completed a 2-month mindfulness workshop. The results showed that whereas reading errors of letter migrations within and between words and vowel-letter errors did not decrease following the workshop, most participants made fewer reading errors in general following the workshop, with a significant reduction of 19% from their original number of errors. This decrease mainly resulted from a decrease in errors that occur due to reading via the sublexical rather than the lexical route. It seems, therefore, that mindfulness helped reading by keeping the readers on the lexical route. This improvement in reading probably resulted from improved sustained attention: the reduction in sublexical reading was significant for the dyslexic participants who also had attention deficits, and there were significant correlations between reduced reading errors and decreases in impulsivity. Following the meditation workshop, the rate of commission errors decreased, indicating decreased impulsivity, and the variation in RTs in the CPT task decreased, indicating improved sustained attention. Significant improvements were obtained in participants' mindfulness, perceived stress, rumination, depression, state anxiety, and sleep disturbances. Correlations were also obtained between reading improvement and increased mindfulness following the workshop. Thus, whereas mindfulness training did not affect specific types of errors and did not improve dyslexia, it did affect the reading of adults with developmental dyslexia and ADHD, by helping them to stay on the straight path of the lexical route while reading. The reading improvement induced by mindfulness thus sheds light on the intricate relation between attention and reading: mindfulness reduced impulsivity and improved sustained attention, and this, in turn, improved reading by helping readers to read via the lexical route.

  19. Mindful Reading: Mindfulness Meditation Helps Keep Readers with Dyslexia and ADHD on the Lexical Track

    PubMed Central

    Tarrasch, Ricardo; Berman, Zohar; Friedmann, Naama

    2016-01-01

    This study explored the effects of a Mindfulness-Based Stress Reduction (MBSR) intervention on reading, attention, and psychological well-being among people with developmental dyslexia and/or attention deficits. Various types of dyslexia exist, characterized by different error types. We examined a question that has not been tested so far: which types of errors (and dyslexias) are affected by MBSR training. To do so, we tested, using an extensive battery of reading tests, whether each participant had dyslexia, and which error types s/he makes, and then compared the rate of each error type before and after the MBSR workshop. We used a similar approach to attention disorders: we evaluated the participants’ sustained, selective, executive, and orienting of attention to assess whether they had attention disorders, and if so, which functions were impaired. We then evaluated the effect of MBSR on each of the attention functions. Psychological measures including mindfulness, stress, reflection and rumination, life satisfaction, depression, anxiety, and sleep disturbances were also evaluated. Nineteen Hebrew readers completed a 2-month mindfulness workshop. The results showed that whereas reading errors of letter migrations within and between words and vowel-letter errors did not decrease following the workshop, most participants made fewer reading errors in general following the workshop, with a significant reduction of 19% from their original number of errors. This decrease mainly resulted from a decrease in errors that occur due to reading via the sublexical rather than the lexical route. It seems, therefore, that mindfulness helped reading by keeping the readers on the lexical route. This improvement in reading probably resulted from improved sustained attention: the reduction in sublexical reading was significant for the dyslexic participants who also had attention deficits, and there were significant correlations between reduced reading errors and decreases in impulsivity. Following the meditation workshop, the rate of commission errors decreased, indicating decreased impulsivity, and the variation in RTs in the CPT task decreased, indicating improved sustained attention. Significant improvements were obtained in participants’ mindfulness, perceived stress, rumination, depression, state anxiety, and sleep disturbances. Correlations were also obtained between reading improvement and increased mindfulness following the workshop. Thus, whereas mindfulness training did not affect specific types of errors and did not improve dyslexia, it did affect the reading of adults with developmental dyslexia and ADHD, by helping them to stay on the straight path of the lexical route while reading. The reading improvement induced by mindfulness thus sheds light on the intricate relation between attention and reading: mindfulness reduced impulsivity and improved sustained attention, and this, in turn, improved reading by helping readers to read via the lexical route. PMID:27242565

  20. Characterization of Used Nuclear Fuel with Multivariate Analysis for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dayman, Kenneth J.; Coble, Jamie B.; Orton, Christopher R.

    2014-01-01

    The Multi-Isotope Process (MIP) Monitor combines gamma spectroscopy and multivariate analysis to detect anomalies in various process streams in a nuclear fuel reprocessing system. Measured spectra are compared to models of nominal behavior at each measurement location to detect unexpected changes in system behavior. In order to improve the accuracy and specificity of process monitoring, fuel characterization may be used to more accurately train subsequent models in a full analysis scheme. This paper presents initial development of a reactor-type classifier that is used to select a reactor-specific partial least squares model to predict fuel burnup. Nuclide activities for prototypic used fuel samples were generated in ORIGEN-ARP and used to investigate techniques to characterize used nuclear fuel in terms of reactor type (pressurized or boiling water reactor) and burnup. A variety of reactor-type classification algorithms, including k-nearest neighbors, linear and quadratic discriminant analyses, and support vector machines, were evaluated to differentiate used fuel from pressurized and boiling water reactors. Then, reactor type-specific partial least squares models were developed to predict the burnup of the fuel. Using these reactor type-specific models instead of a model trained for all light water reactors improved the accuracy of burnup predictions. The developed classification and prediction models were combined and applied to a large dataset that included eight fuel assembly designs, two of which were not used in training the models, and spanned the range of the initial 235U enrichment, cooling time, and burnup values expected of future commercial used fuel for reprocessing. Error rates were consistent across the range of considered enrichment, cooling time, and burnup values. Average absolute relative errors in burnup predictions for validation data both within and outside the training space were 0.0574% and 0.0597%, respectively. The errors seen in this work are artificially low, because the models were trained, optimized, and tested on simulated, noise-free data. However, these results indicate that the developed models may generalize well to new data and that the proposed approach constitutes a viable first step in developing a fuel characterization algorithm based on gamma spectra.
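
    Of the classifiers evaluated, k-nearest neighbors is the simplest to sketch. The toy example below uses two made-up features standing in for the ORIGEN-ARP nuclide activities (not the paper's data) and classifies unknown samples by majority vote over the k closest training samples.

```python
import numpy as np

def knn_predict(train_X, train_y, X, k=3):
    """Classify each row of X by majority vote of the labels of its
    k nearest training rows (Euclidean distance)."""
    preds = []
    for x in X:
        dist = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(dist)[:k]]
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])
    return np.array(preds)

# Toy stand-ins: well-separated PWR-like and BWR-like feature clusters.
rng = np.random.default_rng(1)
train_X = np.vstack([rng.normal([1.0, 0.0], 0.1, size=(20, 2)),
                     rng.normal([0.0, 1.0], 0.1, size=(20, 2))])
train_y = np.array(["PWR"] * 20 + ["BWR"] * 20)
print(knn_predict(train_X, train_y, np.array([[0.9, 0.1], [0.1, 0.9]])))
# → ['PWR' 'BWR']
```

    In the paper's scheme, the predicted reactor type would then select the reactor-specific partial least squares model used for the burnup prediction.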

  1. Understanding reliance on automation: effects of error type, error distribution, age and experience

    PubMed Central

    Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka

    2015-01-01

    An obstacle detection task supported by “imperfect” automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142

  2. Understanding reliance on automation: effects of error type, error distribution, age and experience.

    PubMed

    Sanchez, Julian; Rogers, Wendy A; Fisk, Arthur D; Rovira, Ericka

    2014-03-01

    An obstacle detection task supported by "imperfect" automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation.

  3. Dopamine prediction error responses integrate subjective value from different reward dimensions

    PubMed Central

    Lak, Armin; Stauffer, William R.; Schultz, Wolfram

    2014-01-01

    Prediction error signals enable us to learn through experience. These experiences include economic choices between different rewards that vary along multiple dimensions. Therefore, an ideal way to reinforce economic choice is to encode a prediction error that reflects the subjective value integrated across these reward dimensions. Previous studies demonstrated that dopamine prediction error responses reflect the value of singular reward attributes that include magnitude, probability, and delay. Obviously, preferences between rewards that vary along one dimension are completely determined by the manipulated variable. However, it is unknown whether dopamine prediction error responses reflect the subjective value integrated from different reward dimensions. Here, we measured the preferences between rewards that varied along multiple dimensions, and as such could not be ranked according to objective metrics. Monkeys chose between rewards that differed in amount, risk, and type. Because their choices were complete and transitive, the monkeys chose “as if” they integrated different rewards and attributes into a common scale of value. The prediction error responses of single dopamine neurons reflected the integrated subjective value inferred from the choices, rather than the singular reward attributes. Specifically, amount, risk, and reward type modulated dopamine responses exactly to the extent that they influenced economic choices, even when rewards were vastly different, such as liquid and food. This prediction error response could provide a direct updating signal for economic values. PMID:24453218

  4. The Treatment of Capital Costs in Educational Projects

    ERIC Educational Resources Information Center

    Bezeau, Lawrence

    1975-01-01

    Failure to account for the cost and depreciation of capital leads to suboptimal investments in education, specifically to excessively capital intensive instructional technologies. This type of error, which is particularly serious when planning for developing countries, can be easily avoided. (Author)

  5. Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction

    PubMed Central

    Laehnemann, David; Borkhardt, Arndt

    2016-01-01

    Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance on which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159

  6. At the cross-roads: an on-road examination of driving errors at intersections.

    PubMed

    Young, Kristie L; Salmon, Paul M; Lenné, Michael G

    2013-09-01

    A significant proportion of road trauma occurs at intersections. Understanding the nature of driving errors at intersections therefore has the potential to lead to significant injury reductions. To further understand how the complexity of modern intersections shapes driver behaviour, this on-road study compared the errors made at intersections with those made mid-block and investigated the role of wider systems failures in intersection error causation. Twenty-five participants drove a pre-determined urban route incorporating 25 intersections. Two in-vehicle observers recorded the errors made while a range of other data was collected, including driver verbal protocols, video, driver eye glance behaviour and vehicle data (e.g., speed, braking and lane position). Participants also completed a post-trial cognitive task analysis interview. Participants were found to make 39 specific error types, with speeding violations the most common. Participants made significantly more errors at intersections than mid-block, with misjudgement, action and perceptual/observation errors more commonly observed at intersections. Traffic signal configuration was found to play a key role in intersection error causation, with drivers making more errors at partially signalised than at fully signalised intersections. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Error Patterns with Fraction Calculations at Fourth Grade as a Function of Students' Mathematics Achievement Status.

    PubMed

    Schumacher, Robin F; Malone, Amelia S

    2017-09-01

    The goal of the present study was to describe fraction-calculation errors among 4th-grade students and determine whether error patterns differed as a function of problem type (addition vs. subtraction; like vs. unlike denominators), orientation (horizontal vs. vertical), or mathematics-achievement status (low- vs. average- vs. high-achieving). We specifically addressed whether mathematics-achievement status was related to students' tendency to operate with whole number bias. We extended this focus by comparing low-performing students' errors in two instructional settings that focused on two different types of fraction understandings: core instruction that focused on part-whole understanding vs. small-group tutoring that focused on magnitude understanding. Results showed students across the sample were more likely to operate with whole number bias on problems with unlike denominators. Students with low or average achievement (who only participated in core instruction) were more likely to operate with whole number bias than students with low achievement who participated in small-group tutoring. We suggest instruction should emphasize magnitude understanding to sufficiently increase fraction understanding for all students in the upper elementary grades.

  8. An agent-based model of dialect evolution in killer whales.

    PubMed

    Filatova, Olga A; Miller, Patrick J O

    2015-05-21

    The killer whale is one of the few animal species with vocal dialects that arise from socially learned group-specific call repertoires. We describe a new agent-based model of killer whale populations and test a set of vocal-learning rules to assess which mechanisms may lead to the formation of dialect groupings observed in the wild. We tested a null model with genetic transmission and no learning, and ten models with learning rules that differ by template source (mother or matriline), variation type (random errors or innovations) and type of call change (no divergence from kin vs. divergence from kin). The null model without vocal learning did not produce the pattern of group-specific call repertoires we observe in nature. Learning from either mother alone or the entire matriline with calls changing by random errors produced a graded distribution of the call phenotype, without the discrete call types observed in nature. Introducing occasional innovation or random error proportional to matriline variance yielded more or less discrete and stable call types. A tendency to diverge from the calls of related matrilines provided fast divergence of loose call clusters. A pattern resembling the dialect diversity observed in the wild arose only when rules were applied in combinations and similar outputs could arise from different learning rules and their combinations. Our results emphasize the lack of information on quantitative features of wild killer whale dialects and reveal a set of testable questions that can draw insights into the cultural evolution of killer whale dialects. Copyright © 2015 Elsevier Ltd. All rights reserved.
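    The learning rules compared in this model can be illustrated with a toy sketch. This is not the authors' implementation: it assumes a one-dimensional call phenotype and hypothetical parameter values, whereas the actual agent-based model is richer.

```python
import random

def learn_call(template_calls, p_innovate=0.01, error_sd=0.05):
    """One vocal-learning step in a toy one-dimensional call space.

    template_calls: calls of the template source (the mother alone, or the
    whole matriline). With small probability the learner innovates (jumps to
    a new call value); otherwise it copies the mean of the template calls
    with a random copying error.
    """
    if random.random() < p_innovate:
        return random.uniform(0.0, 1.0)  # innovation: a new call type
    template = sum(template_calls) / len(template_calls)
    return template + random.gauss(0.0, error_sd)  # random copying error
```

    With `p_innovate = 0` and a small `error_sd`, repeated application produces the graded drift of the call phenotype described above; occasional innovations are what allow discrete, stable call types to split off.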

  9. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e., additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e., biased toward the null) by 5% to 34%. For CO, with the modelled error amount held fixed, a range of error types was simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
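    The attenuation produced by classical-type multiplicative error can be sketched with a minimal Monte Carlo simulation. This is illustrative only: a linear model on the log scale stands in for the study's Poisson GLM, and all numbers (sample size, effect size, error magnitude) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
true_exposure = rng.lognormal(mean=1.0, sigma=0.5, size=n)
beta = 0.3  # assumed true per-unit (log-scale) health effect
outcome = beta * np.log(true_exposure) + rng.normal(0.0, 0.1, n)

# classical-type multiplicative error: measured = true * noise,
# i.e. additive error on the log scale
measured = true_exposure * rng.lognormal(mean=0.0, sigma=0.4, size=n)

def log_slope(exposure, outcome):
    """Least-squares slope of outcome on log(exposure)."""
    x = np.log(exposure)
    return np.cov(x, outcome)[0, 1] / np.var(x)

b_true = log_slope(true_exposure, outcome)
b_measured = log_slope(measured, outcome)
# classical error biases the estimated slope toward the null (attenuation)
```

    Running this gives `b_measured` well below `b_true`, matching the attenuation direction reported above for classical-type error; Berkson-type error requires a different generating process (measured values assigned, true values scattered around them) and behaves differently.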

  10. Category specific dysnomia after thalamic infarction: a case-control study.

    PubMed

    Levin, Netta; Ben-Hur, Tamir; Biran, Iftah; Wertman, Eli

    2005-01-01

    Category specific naming impairment was described mainly after cortical lesions. It is thought to result from a lesion in a specific network, reflecting the organization of our semantic knowledge. The deficit usually involves multiple semantic categories whose profile of naming deficit generally obeys the animate/inanimate dichotomy. Thalamic lesions cause general semantic naming deficit, and only rarely a category specific semantic deficit for very limited and highly specific categories. We performed a case-control study on a 56-year-old right-handed man who presented with language impairment following a left anterior thalamic infarction. His naming ability and semantic knowledge were evaluated in the visual, tactile and auditory modalities for stimuli from 11 different categories, and compared to that of five controls. In naming to visual stimuli the patient performed poorly (error rate > 50%) in four categories: vegetables, toys, animals and body parts (average 70.31 ± 15%). In each category there was a different dominating error type. He performed better in the other seven categories (tools, clothes, transportation, fruits, electric, furniture, kitchen utensils), averaging 14.28 ± 9% errors. Further analysis revealed a dichotomy between naming in animate and inanimate categories in the visual and tactile modalities but not in response to auditory stimuli. Thus, a unique category specific profile of response and naming errors to visual and tactile, but not auditory stimuli was found after a left anterior thalamic infarction. This might reflect the role of the thalamus not only as a relay station but further as a central integrator of different stages of perceptual and semantic processing.

  11. Specificity of Atmospheric Correction of Satellite Data on Ocean Color in the Far East

    NASA Astrophysics Data System (ADS)

    Aleksanin, A. I.; Kachur, V. A.

    2017-12-01

    Calculation errors in ocean-brightness coefficients in the Far East are analyzed for two atmospheric correction algorithms (NIR and MUMM). The daylight measurements in different water types show that the main error component is systematic and has a simple dependence on the magnitudes of the coefficients. The causes of the error behavior are considered. The most probable explanation for the large errors in ocean-color parameters in the Far East is a high concentration of light-absorbing continental aerosol. A comparison between satellite and in situ measurements at AERONET stations in the United States and South Korea has been made. It is shown that the errors in these two regions differ by up to 10 times for waters of similar turbidity, even when the aerosol optical depth is computed with relatively high precision using the NIR correction of the atmospheric effect.

  12. The Surveillance Error Grid

    PubMed Central

    Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B.; Kirkman, M. Sue; Kovatchev, Boris

    2014-01-01

    Introduction: Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. Methods: A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. Results: SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
    When modeled glucose monitor data with realistic self-monitoring of blood glucose errors, derived from meter testing experiments, were plotted on the SEG, the resulting risk estimates were more granular and reflective of a continuously increasing risk scale than when the same data were plotted on the CEG and PEG. Discussion: The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. PMID:25562886
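    The "percentages of data points falling into custom-defined risk zones" summary can be sketched as a histogram over risk-score bins. The zone edges and scores below are illustrative placeholders, not the SEG's actual 15-zone definition.

```python
import numpy as np

def zone_percentages(risk_scores, zone_edges):
    """Percentage of monitor/reference pairs whose assigned risk score
    falls into each zone (zone_edges are hypothetical, not the SEG's)."""
    counts, _ = np.histogram(risk_scores, bins=zone_edges)
    return 100.0 * counts / len(risk_scores)

scores = np.array([0.1, 0.2, 0.4, 1.1, 2.5, 3.6])  # illustrative risk scores
edges = [0.0, 0.5, 1.5, 2.5, 3.5, 4.5]             # illustrative zone edges
print(zone_percentages(scores, edges))
```

    Finer-grained zones are what give the SEG its more continuous risk scale compared with the coarse zones of the CEG and PEG.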

  13. The surveillance error grid.

    PubMed

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
    When modeled glucose monitor data with realistic self-monitoring of blood glucose errors, derived from meter testing experiments, were plotted on the SEG, the resulting risk estimates were more granular and reflective of a continuously increasing risk scale than when the same data were plotted on the CEG and PEG. The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.

  14. Anthropometric data error detecting and correction with a computer

    NASA Technical Reports Server (NTRS)

    Chesak, D. D.

    1981-01-01

    Data obtained with automated anthropometric data acquisition equipment were examined for short-term errors. The least squares curve fitting technique was used to ascertain which data values were erroneous and to replace them, if possible, with corrected values. Errors were due to random reflections of light, masking of the light rays, and other types of optical and electrical interference. It was found that these error signals were impossible to eliminate from the initial data produced by the television cameras, and that removing them was primarily a software problem requiring a digital computer to refine the data off line. The specific data of interest were related to the arm reach envelope of a human being.
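    The detect-and-replace approach described here can be sketched as a low-order least-squares fit with a residual threshold. The data, polynomial degree, and threshold below are illustrative assumptions, not the parameters of the original system.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
signal = 10.0 * t**2                       # smooth underlying measurement
data = signal + rng.normal(0.0, 0.05, t.size)
data[[7, 31]] += 5.0                       # short-term spikes, e.g. stray reflections

# least squares curve fit; flag points with anomalously large residuals
coeffs = np.polyfit(t, data, deg=2)
fitted = np.polyval(coeffs, t)
residuals = data - fitted
threshold = 6.0 * np.median(np.abs(residuals))
bad = np.abs(residuals) > threshold
cleaned = np.where(bad, fitted, data)      # replace flagged values with fitted ones
```

    The median-based threshold keeps the spikes themselves from inflating the cutoff, which matters when errors are large relative to the measurement noise.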

  15. Prescription errors before and after introduction of electronic medication alert system in a pediatric emergency department.

    PubMed

    Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M

    2015-06-01

    Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with an electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in the errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers: 88 (11.3%) resulted in medication errors, and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
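    The headline absolute risk reduction and its interval can be reproduced from the abstract's rates and sample sizes with a normal-approximation (Wald) formula. The abstract does not state which CI method the authors used, so the formula here is an assumption; it happens to match the reported 3.1 (2.2 to 4.0) per 100 prescriptions.

```python
import math

def arr_ci(p1, n1, p2, n2, z=1.96):
    """Absolute risk reduction with a normal-approximation 95% CI."""
    arr = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return arr, arr - z * se, arr + z * se

# error rates per prescription before/after CPOE, from the abstract
arr, lo, hi = arr_ci(0.104, 7268, 0.073, 7292)
print(round(arr * 100, 1), round(lo * 100, 1), round(hi * 100, 1))  # → 3.1 2.2 4.0
```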

  16. A system to use electromagnetic tracking for the quality assurance of brachytherapy catheter digitization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.

    2014-10-15

    Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated.
    Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
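    The error-detection idea can be sketched as a per-catheter distance check after registration. This is a simplified illustration with invented coordinates and a hypothetical threshold; the study's algorithm also distinguishes swap, mix, and shift errors, which a single mean-distance check cannot do.

```python
import numpy as np

def flag_catheters(emt_dwells, ct_dwells, threshold_mm=2.0):
    """Return catheters whose mean EMT-to-CT dwell distance exceeds the
    threshold, as candidates for digitization errors (e.g. swap or shift)."""
    flagged = {}
    for cath, emt_pts in emt_dwells.items():
        d = np.linalg.norm(emt_pts - ct_dwells[cath], axis=1)
        if d.mean() > threshold_mm:
            flagged[cath] = round(float(d.mean()), 1)
    return flagged

# two catheters, 3 dwells each (mm); catheter 2 digitized with a 3 mm shift
emt = {1: np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [10.0, 0.0, 0.0]]),
       2: np.array([[0.0, 20.0, 0.0], [5.0, 20.0, 0.0], [10.0, 20.0, 0.0]])}
ct = {1: emt[1] + 0.3,                        # sub-mm registration residual only
      2: emt[2] + np.array([3.0, 0.0, 0.0])}  # shifted along the catheter axis
print(flag_catheters(emt, ct))  # → {2: 3.0}
```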

  17. The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking.

    PubMed

    Norman, Geoffrey R; Monteiro, Sandra D; Sherbino, Jonathan; Ilgen, Jonathan S; Schmidt, Henk G; Mamede, Silvia

    2017-01-01

    Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits?The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.

  18. Theoretical Study of the Conditions of Maximum Manifestation of the Error Due to Inhomogeneity of Thermocouple Legs

    NASA Astrophysics Data System (ADS)

    Liu, Zhigang; Song, Wenguang; Kochan, Orest; Mykyichuk, Mykola; Jun, Su

    2017-07-01

    A method for the theoretical analysis of the temperature ranges in which the error due to acquired thermoelectric inhomogeneity of thermocouple legs manifests most strongly is proposed in this paper. The drift of the reference function of a type K thermocouple in ceramic insulation, consisting of 1.2 mm diameter thermoelements, after exposure to 800°C for 10 000 h in an oxidizing atmosphere (air), is analyzed. The method takes into account various operating conditions to determine the optimal conditions for studying inhomogeneous thermocouples. The method can be applied to other types of thermocouples, taking into account their specific characteristics and the conditions to which they have been exposed.

  19. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    EIA Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depend on the type and characteristics of the survey.

  20. Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution

    ERIC Educational Resources Information Center

    Tong, Xin; Zhang, Zhiyong

    2012-01-01

    Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…

  1. Systematic evidence review of rates and burden of harm of intravenous admixture drug preparation errors in healthcare settings

    PubMed Central

    Beer, Idal; Hoppe-Tichy, Torsten; Trbovich, Patricia

    2017-01-01

    Objective To examine published evidence on intravenous admixture preparation errors (IAPEs) in healthcare settings. Methods Searches were conducted in three electronic databases (January 2005 to April 2017). Publications reporting rates of IAPEs and error types were reviewed and categorised into the following groups: component errors, dose/calculation errors, aseptic technique errors and composite errors. The methodological rigour of each study was assessed using the Hawker method. Results Of the 34 articles that met inclusion criteria, 28 reported the site of IAPEs: central pharmacies (n=8), nursing wards (n=14), both settings (n=4) and other sites (n=3). Using the Hawker criteria, 14% of the articles were of good quality, 74% were of fair quality and 12% were of poor quality. Error types and reported rates varied substantially, including wrong drug (~0% to 4.7%), wrong diluent solution (0% to 49.0%), wrong label (0% to 99.0%), wrong dose (0% to 32.6%), wrong concentration (0.3% to 88.6%), wrong diluent volume (0.06% to 49.0%) and inadequate aseptic technique (0% to 92.7%). Four studies directly compared incidence by preparation site and/or method, finding error incidence to be lower for doses prepared within a central pharmacy versus the nursing ward and lower for automated preparation versus manual preparation. Although eight studies (24%) reported ≥1 error with the potential to cause patient harm, no study directly linked IAPE occurrences to specific adverse patient outcomes. Conclusions The available data suggest a need to continue to optimise the intravenous preparation process, focus on improving preparation workflow, design and implement preventive strategies, train staff on optimal admixture protocols and implement standardisation.
Future research should focus on the development of consistent error subtype definitions, standardised reporting methodology and reliable, reproducible methods to track and link risk factors with the burden of harm associated with these errors. PMID:29288174

  2. Quality Assurance of NCI Thesaurus by Mining Structural-Lexical Patterns

    PubMed Central

    Abeysinghe, Rashmie; Brooks, Michael A.; Talbert, Jeffery; Cui, Licong

    2017-01-01

    Quality assurance of biomedical terminologies such as the National Cancer Institute (NCI) Thesaurus is an essential part of the terminology management lifecycle. We investigate a structural-lexical approach based on non-lattice subgraphs to automatically identify missing hierarchical relations and missing concepts in the NCI Thesaurus. We mine six structural-lexical patterns exhibited in non-lattice subgraphs: containment, union, intersection, union-intersection, inference-contradiction, and inference-union. Each pattern indicates a potential specific type of error and suggests a potential type of remediation. We found 809 non-lattice subgraphs with these patterns in the NCI Thesaurus (version 16.12d). Domain experts evaluated a random sample of 50 small non-lattice subgraphs, of which 33 were confirmed to contain errors and make correct suggestions (33/50 = 66%). Of the 25 evaluated subgraphs revealing multiple patterns, 22 were verified correct (22/25 = 88%). This shows the effectiveness of our structural-lexical-pattern-based approach in detecting errors and suggesting remediations in the NCI Thesaurus. PMID:29854100

  3. Evaluation of a UMLS Auditing Process of Semantic Type Assignments

    PubMed Central

    Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua

    2007-01-01

    The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845

  4. Mapping the EORTC QLQ-C30 onto the EQ-5D-3L: assessing the external validity of existing mapping algorithms.

    PubMed

    Doble, Brett; Lorgelly, Paula

    2016-04-01

    To determine the external validity of existing mapping algorithms for predicting EQ-5D-3L utility values from EORTC QLQ-C30 responses and to establish their generalizability in different types of cancer. A main analysis (pooled) sample of 3560 observations (1727 patients) and two disease severity patient samples (496 and 93 patients) with repeated observations over time from Cancer 2015 were used to validate the existing algorithms. Errors were calculated between observed and predicted EQ-5D-3L utility values using a single pooled sample and ten pooled tumour type-specific samples. Predictive accuracy was assessed using mean absolute error (MAE) and standardized root-mean-squared error (RMSE). The association between observed and predicted EQ-5D utility values and other covariates across the distribution was tested using quantile regression. Quality-adjusted life years (QALYs) were calculated using observed and predicted values to test responsiveness. Ten 'preferred' mapping algorithms were identified. Two algorithms estimated via response mapping and ordinary least-squares regression using dummy variables performed well on a number of validation criteria, including accurate prediction of the best and worst QLQ-C30 health states, predicted values within the EQ-5D tariff range, relatively small MAEs and RMSEs, and minimal differences between estimated QALYs. Comparison of predictive accuracy across ten tumour type-specific samples highlighted that algorithms are relatively insensitive to grouping by tumour type and affected more by differences in disease severity. Two of the 'preferred' mapping algorithms suggest more accurate predictions, but limitations exist. We recommend extensive scenario analyses if mapped utilities are used in cost-utility analyses.
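The two accuracy metrics this abstract relies on, MAE and RMSE, can be sketched in a few lines. This is illustrative code with made-up utility values, not the study's analysis:

```python
# Predictive-accuracy metrics for mapped utility values:
# MAE = mean absolute error, RMSE = root-mean-squared error.
import math

def mae(observed, predicted):
    """Mean absolute error between observed and predicted utilities."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def rmse(observed, predicted):
    """Root-mean-squared error; penalizes large deviations more than MAE."""
    return math.sqrt(
        sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed)
    )

# Hypothetical EQ-5D-3L utilities (observed) vs mapped predictions.
obs = [0.85, 0.62, 0.71, 1.00]
pred = [0.80, 0.70, 0.69, 0.95]
print(round(mae(obs, pred), 4))   # 0.05
print(round(rmse(obs, pred), 4))  # 0.0543
```

Because RMSE squares the residuals, a few badly mispredicted health states inflate it faster than MAE, which is why validation studies typically report both.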

  5. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However, in a very limited number of cases, crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case.
Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.

  7. A model of memory impairment in schizophrenia: cognitive and clinical factors associated with memory efficiency and memory errors.

    PubMed

    Brébion, Gildas; Bressan, Rodrigo A; Ohlsen, Ruth I; David, Anthony S

    2013-12-01

    Memory impairments in patients with schizophrenia have been associated with various cognitive and clinical factors. Hallucinations have been more specifically associated with errors stemming from source monitoring failure. We conducted a broad investigation of verbal memory and visual memory as well as source memory functioning in a sample of patients with schizophrenia. Various memory measures were tallied, and we studied their associations with processing speed, working memory span, and positive, negative, and depressive symptoms. Superficial and deep memory processes were differentially associated with processing speed, working memory span, avolition, depression, and attention disorders. Auditory/verbal and visual hallucinations were differentially associated with specific types of source memory error. We integrated all the results into a revised version of a previously published model of memory functioning in schizophrenia. The model describes the factors that affect memory efficiency, as well as the cognitive underpinnings of hallucinations within the source monitoring framework. © 2013.

  8. Personal digital assistant-based drug information sources: potential to improve medication safety.

    PubMed

    Galt, Kimberly A; Rule, Ann M; Houghton, Bruce; Young, Daniel O; Remington, Gina

    2005-04-01

    This study compared the potential for personal digital assistant (PDA)-based drug information sources to minimize potential medication errors dependent on accurate and complete drug information at the point of care. A quality and safety framework for drug information resources was developed to evaluate 11 PDA-based drug information sources. Three drug information sources met the criteria of the framework: ePocrates Rx Pro, Lexi-Drugs, and mobileMICROMEDEX. Medication error types related to drug information at the point of care were then determined. Forty-seven questions were developed to test the potential of the sources to prevent these error types. Pharmacists and physician experts from Creighton University created these questions based on the most common types of questions asked by primary care providers. Three physicians evaluated the drug information sources, rating each source for each question: 1=no information available, 2=some information available, or 3=adequate amount of information available. The mean ratings for the drug information sources were: 2.0 (ePocrates Rx Pro), 2.5 (Lexi-Drugs), and 2.03 (mobileMICROMEDEX). Lexi-Drugs was rated significantly better than mobileMICROMEDEX (t test, P=0.05) and ePocrates Rx Pro (t test, P=0.01). Lexi-Drugs was found to be the most specific and complete PDA resource available to optimize medication safety by reducing potential errors associated with drug information. No resource was sufficient to address the patient safety information needs for all cases.

  9. Author Correction: Single-nucleus analysis of accessible chromatin in developing mouse forebrain reveals cell-type-specific transcriptional regulation.

    PubMed

    Preissl, Sebastian; Fang, Rongxin; Huang, Hui; Zhao, Yuan; Raviram, Ramya; Gorkin, David U; Zhang, Yanxiao; Sos, Brandon C; Afzal, Veena; Dickel, Diane E; Kuan, Samantha; Visel, Axel; Pennacchio, Len A; Zhang, Kun; Ren, Bing

    2018-03-01

    In the version of this article initially published online, the accession code was given as GSE1000333. The correct code is GSE100033. The error has been corrected in the print, HTML and PDF versions of the article.

  10. Selection Practices of Group Leaders: A National Survey.

    ERIC Educational Resources Information Center

    Riva, Maria T.; Lippert, Laurel; Tackett, M. Jan

    2000-01-01

    Study surveys the selection practices of group leaders. Explores methods of selection, variables used to make selection decisions, and the types of selection errors that leaders have experienced. Results suggest that group leaders use clinical judgment to make selection decisions and endorse using some specific variables in selection. (Contains 22…

  11. An error taxonomy system for analysis of haemodialysis incidents.

    PubMed

    Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi

    2014-12-01

    This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting a general error taxonomy system, which assumed no specific specialty, to haemodialysis situations. It was applied to 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to the dialyser, circuit, medication and setting of dialysis conditions. Approximately 70% of errors took place immediately before or after the four hours of haemodialysis therapy. The error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications attributed to staff human factors, communication, task and organisational factors were found in most dialysis incidents. Devices/equipment/materials, medicines and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicines and documents, whereas dialysis technologists made more errors with devices/equipment/materials. This error taxonomy system is able not only to investigate incidents and adverse events occurring in the dialysis setting but also to estimate the safety-related status of an organisation, such as its reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  12. Typing mineral deposits using their grades and tonnages in an artificial neural network

    USGS Publications Warehouse

    Singer, Donald A.; Kouda, Ryoichi

    2003-01-01

    A test of the ability of a probabilistic neural network to classify deposits into types on the basis of deposit tonnage and average Cu, Mo, Ag, Au, Zn, and Pb grades is conducted. The purpose is to examine whether this type of system might serve as a basis for integrating geoscience information available in large mineral databases to classify sites by deposit type. Benefits of proper classification of many sites in large regions are relatively rapid identification of terranes permissive for deposit types and recognition of specific sites perhaps worthy of exploring further. Total tonnages and average grades of 1,137 well-explored deposits identified in published grade and tonnage models representing 13 deposit types were used to train and test the network. Tonnages were transformed by logarithms and grades by square roots to reduce effects of skewness. All values were scaled by subtracting the variable's mean and dividing by its standard deviation. Half of the deposits were selected randomly to be used in training the probabilistic neural network and the other half were used for independent testing. Tests were performed with a probabilistic neural network employing a Gaussian kernel and separate sigma weights for each class (type) and each variable (grade or tonnage). Deposit types were selected to challenge the neural network. For many types, tonnages or average grades are significantly different from other types, but individual deposits may plot in the grade and tonnage space of more than one type. Porphyry Cu, porphyry Cu-Au, and porphyry Cu-Mo types have similar tonnages and relatively small differences in grades. Redbed Cu deposits typically have tonnages that could be confused with porphyry Cu deposits, and also contain Cu and, in some situations, Ag. Cyprus and kuroko massive sulfide types have about the same tonnages and Cu, Zn, Ag, and Au grades. Polymetallic vein, sedimentary exhalative Zn-Pb, and Zn-Pb skarn types contain many of the same metals. 
Sediment-hosted Au, Comstock Au-Ag, and low-sulfide Au-quartz vein types are principally Au deposits with differing amounts of Ag. Given the intent to test the neural network under the most difficult conditions, an overall 75% agreement between the experts and the neural network is considered excellent. Among the largest classification errors are skarn Zn-Pb and Cyprus massive sulfide deposits classed by the neural network as kuroko massive sulfides (24% and 63% error, respectively). Another large error is the classification of 92% of porphyry Cu-Mo as porphyry Cu deposits. Most of the larger classification errors involve 25 or fewer training deposits, suggesting that some errors might be the result of small sample size. About 91% of the gold deposit types were classed properly and 98% of porphyry Cu deposits were classed as some type of porphyry Cu deposit. An experienced economic geologist would not make many of the classification errors that were made by the neural network because the geologic settings of deposits would be used to reduce errors. In a separate test, the probabilistic neural network correctly classed 93% of 336 deposits in eight deposit types when trained with presence or absence of 58 minerals and six generalized rock types. The overall success rate of the probabilistic neural network when trained on tonnage and average grades would probably be more than 90% with additional information on the presence of a few rock types.
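The preprocessing and classifier described in this abstract (log/square-root transforms, z-scoring, and a Gaussian-kernel probabilistic neural network with per-class, per-variable sigmas) can be sketched as follows. The deposit-type names, feature values, and sigma values below are made up for illustration; this is not the study's trained network:

```python
# Sketch of a probabilistic neural network (PNN) deposit-type classifier.
import math

def standardize(column):
    """Scale by subtracting the mean and dividing by the standard deviation."""
    m = sum(column) / len(column)
    sd = math.sqrt(sum((v - m) ** 2 for v in column) / len(column))
    return [(v - m) / sd for v in column]

def pnn_class_density(x, training, sigmas):
    """Average Gaussian-kernel density of x over one deposit type's examples."""
    total = 0.0
    for t in training:
        d = sum(((xi - ti) / s) ** 2 for xi, ti, s in zip(x, t, sigmas))
        total += math.exp(-0.5 * d)
    return total / len(training)

def classify(x, classes):
    """classes maps type name -> (training points, per-variable sigmas)."""
    return max(classes, key=lambda c: pnn_class_density(x, *classes[c]))

# Tonnage preprocessing as in the abstract: log transform, then standardize.
z = standardize([math.log10(t) for t in [2e6, 5e7, 9e8]])

# Hypothetical 2-D features: [standardized log tonnage, standardized sqrt Cu grade].
classes = {
    "porphyry Cu": ([[1.2, 0.8], [1.0, 0.9]], [0.5, 0.3]),
    "redbed Cu":   ([[-0.8, 1.4], [-1.0, 1.2]], [0.5, 0.3]),
}
print(classify([1.1, 0.85], classes))  # porphyry Cu
```

The PNN assigns a point to the type whose training examples give it the highest kernel-density estimate, which is why types with overlapping grade-tonnage space (e.g. the porphyry Cu subtypes) are the hardest to separate.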

  13. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
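The voxel-wise likelihood-ratio idea can be illustrated with a deliberately simplified Gaussian model (known noise sigma, a single effect size delta); the numbers and the evidence threshold below are assumptions for illustration, not the paper's model:

```python
# Strength of evidence per voxel via a likelihood ratio L(mu=delta)/L(mu=0),
# instead of a per-voxel p-value.
import math

def log_likelihood_ratio(xbar, n, delta, sigma):
    """log[L(mu=delta) / L(mu=0)] for a Gaussian sample mean xbar of n scans."""
    # Up to a shared constant, log L(mu) = -n * (xbar - mu)^2 / (2 * sigma^2).
    return (n / (2 * sigma ** 2)) * (xbar ** 2 - (xbar - delta) ** 2)

# Hypothetical voxel means with n=20 scans, noise sigma=1, effect delta=0.5.
voxel_means = {"v1": 0.05, "v2": 0.48}
for v, xbar in voxel_means.items():
    lr = math.exp(log_likelihood_ratio(xbar, n=20, delta=0.5, sigma=1.0))
    # Declare evidence for activation when the ratio is large (threshold 8 here).
    print(v, round(lr, 2), lr >= 8)
```

Because the likelihood ratio concentrates as n grows, both the chance of misleading evidence for activation and against it shrink together, which is the "both error rates converge to zero" behavior the abstract describes.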

  14. User type certification for advanced flight control systems

    NASA Technical Reports Server (NTRS)

    Gilson, Richard D.; Abbott, David W.

    1994-01-01

    Advanced avionics through flight management systems (FMS) coupled with autopilots can now precisely control aircraft from takeoff to landing. Clearly, this has been the most important improvement in aircraft since the jet engine. Regardless of the eventual capabilities of this technology, it is doubtful that society will soon accept pilotless airliners with the same aplomb they accept driverless passenger trains. Flight crews are still needed to deal with inputting clearances, taxiing, in-flight rerouting, unexpected weather decisions, and emergencies; yet it is well known that the contribution of human errors far exceeds that of current hardware or software systems. Thus human errors remain, and are even increasing in percentage as the largest contributor to total system error. Currently, the flight crew is regulated by a layered system of certification: by operation, e.g., airline transport pilot versus private pilot; by category, e.g., airplane versus helicopter; by class, e.g., single engine land versus multi-engine land; and by type (for larger aircraft and jet powered aircraft), e.g., Boeing 767 or Airbus A320. Nothing in the certification process now requires an in-depth proficiency with specific types of avionics systems despite their prominent role in aircraft control and guidance.

  15. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    PubMed Central

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-01-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science. PMID:25387525
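The central object in this abstract, the genetic similarity matrix (GSM), and the proposed two-component mixture can be sketched as follows. This is a rough illustration with random toy genotypes, not the FaST-LMM implementation, and the choice of "predictive" SNPs and the mixing weight are placeholders:

```python
# Sketch: GSM from standardized SNPs, plus a two-component mixture GSM.
import numpy as np

def gsm(genotypes):
    """genotypes: (individuals x SNPs) 0/1/2 matrix -> pairwise similarity."""
    X = np.asarray(genotypes, dtype=float)
    X = X[:, X.std(axis=0) > 0]               # drop monomorphic SNPs
    X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each SNP column
    return X @ X.T / X.shape[1]

def mixture_gsm(g_all, g_selected, a):
    """Convex combination of the all-SNP GSM and the selected-SNP GSM."""
    return (1 - a) * g_all + a * g_selected

rng = np.random.default_rng(0)
G = rng.integers(0, 3, size=(5, 100))           # 5 individuals, 100 toy SNPs
K = mixture_gsm(gsm(G), gsm(G[:, :20]), a=0.5)  # first 20 SNPs "predictive" (toy)
print(K.shape)  # (5, 5)
```

In an LMM the resulting matrix K parameterizes the covariance of the random genetic effect, so how the GSM is built directly drives the type I error and power trade-offs the abstract reports.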

  16. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    NASA Astrophysics Data System (ADS)

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.

  17. Further improvements to linear mixed models for genome-wide association studies.

    PubMed

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-12

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.

  18. Transmission and storage of medical images with patient information.

    PubMed

    Acharya U, Rajendra; Subbanna Bhat, P; Kumar, Sathish; Min, Lim Choo

    2003-07-01

    Digital watermarking is a technique of hiding specific identification data for copyright authentication. This technique is adapted here for interleaving patient information with medical images, to reduce storage and transmission overheads. The text data is encrypted before interleaving with images to ensure greater security. The graphical signals are interleaved with the image. Two types of error-control coding techniques are proposed to enhance the reliability of transmission and storage of medical images interleaved with patient information. Transmission and storage scenarios are simulated with and without error-control coding, and a qualitative as well as quantitative interpretation of the reliability enhancement resulting from the use of commonly used error-control codes, such as repetition codes and the (7,4) Hamming code, is provided.
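The (7,4) Hamming code named in the abstract encodes 4 data bits into 7 bits and can correct any single flipped bit, which is what makes it useful for protecting interleaved patient data. A minimal sketch of the standard code (not the paper's implementation):

```python
# (7,4) Hamming code: encode 4 data bits, correct a single bit error.
def hamming74_encode(d):
    """d: 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4   # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4   # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                          # simulate a single bit error in storage
assert hamming74_decode(code) == word
```

A repetition code, the other family the abstract mentions, achieves the same single-error correction by sending each bit three times and majority-voting, at a much worse rate (1/3 versus 4/7).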

  19. Left neglect dyslexia: Perseveration and reading error types.

    PubMed

    Ronchi, Roberta; Algeri, Lorella; Chiapella, Laura; Gallucci, Marcello; Spada, Maria Simonetta; Vallar, Giuseppe

    2016-08-01

    Right-brain-damaged patients may show a reading disorder termed neglect dyslexia. Patients with left neglect dyslexia omit letters in the left-hand-side part of the letter string (the beginning, when reading left-to-right), substitute them with other letters, and add letters to the left of the string. The aim of this study was to investigate the pattern of association, if any, between error types in patients with left neglect dyslexia and recurrent perseveration (a productive visuo-motor deficit characterized by addition of marks) in target cancellation. Specifically, we aimed at assessing whether different productive symptoms (relative to the reading and the visuo-motor domains) could be associated in patients with left spatial neglect. Fifty-four right-brain-damaged patients took part in the study: 50 out of the 54 patients showed left spatial neglect, with 27 of them also exhibiting left neglect dyslexia. Neglect dyslexic patients who showed perseveration produced mainly substitution neglect errors in reading. Conversely, omissions were the prevailing reading error pattern in neglect dyslexic patients without perseveration. Addition reading errors were infrequent. Different functional pathological mechanisms may underlie omission and substitution reading errors committed by right-brain-damaged patients with left neglect dyslexia. One such mechanism, involving the defective stopping of inappropriate responses, may contribute to both recurrent perseveration in target cancellation, and substitution errors in reading. Productive pathological phenomena, together with deficits of spatial attention to events taking place on the left-hand-side of space, shape the manifestations of neglect dyslexia, and, more generally, of spatial neglect. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Identifying types and causes of errors in mortality data in a clinical registry using multiple information systems.

    PubMed

    Koetsier, Antonie; Peek, Niels; de Keizer, Nicolette

    2012-01-01

    Errors may occur in the registration of in-hospital mortality, making it less reliable as a quality indicator. We assessed the types of errors made in in-hospital mortality registration in the clinical quality registry National Intensive Care Evaluation (NICE) by comparing its mortality data to data from a national insurance claims database. Subsequently, we performed site visits at eleven Intensive Care Units (ICUs) to investigate the number, types and causes of errors made in in-hospital mortality registration. A total of 255 errors were found in the NICE registry. Two different types of software malfunction accounted for almost 80% of the errors. The remaining 20% were five types of manual transcription errors and human failures to record outcome data. Clinical registries should be aware of the possible existence of errors in recorded outcome data and understand their causes. In order to prevent errors, we recommend to thoroughly verify the software that is used in the registration process.

  1. VizieR Online Data Catalog: Spitzer Atlas of Stellar Spectra (SASS) (Ardila+, 2010)

    NASA Astrophysics Data System (ADS)

    Ardila, D. R.; van Dyk, S. D.; Makowiecki, W.; Stauffer, J.; Song, I.; Rho, J.; Fajardo-Acosta, S.; Hoard, D. W.; Wachter, S.

    2010-11-01

    From IRS Staring observations in the Spitzer archive we selected those stellar targets that had been observed with all the low-resolution IRS modules. We did not include known young stars with circumstellar material, stars known to harbor debris disks, or objects classified in SIMBAD as RS CVn, Be stars, or eclipsing binaries. We have also avoided classes already fully described with IRAS, ISO, or Spitzer, such as Asymptotic Giant Branch stars, and rejected targets presenting IR excesses. However, note that in the case of very massive and/or evolved stars there are few objects presenting a pure photospheric spectrum. A few stars are specifically selected for their intrinsic interest regardless of their IR excess and even if the Atlas already contained another star with the same spectral type. The spectral coverage only reaches to 14um in the case of very late spectral classes (late M, L and T dwarfs) and some WR stars for which the long wavelength modules are unusable or not present in the archive. The spectral types have been taken from (in order of priority): * NStED (http://nsted.ipac.caltech.edu/), * NStars (http://nstars.nau.edu/nau_nstars/about.htm), * the Tycho-2 Spectral Type Catalog (Cat. III/231) * SIMBAD. For certain types of objects, we have used specialized catalogs as the source of the spectral types. The data were processed with the Spitzer Science Center S18.7.0 pipeline and corrected for teardrop effects, slit position uncertainties, residual flat-field errors, residual model errors, 24um flux deficit (1), fringing, and order mismatches. The Atlas files contain an error value for each wavelength, intended to represent the random 1-sigma error at that wavelength. This is the error provided by the SSC's S18.7.0 pipeline and propagated along the reduction procedure. The treatment of errors remains incomplete in this pipeline (2). The errors provided here should be considered carefully, before propagating them into further calculations. 
However, the processing insures that the spectra do not have strong spurious emission or absorption lines in large signal-to-noise regions. (1) http://ssc.spitzer.caltech.edu/irs/irsinstrumenthandbook/102/ #Toc253561116 (2) http://ssc.spitzer.caltech.edu/irs/irsinstrumenthandbook/ (4 data files).

  2. Some practical problems in implementing randomization.

    PubMed

    Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet

    2010-06-01

    While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
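Recommendation (2), being cautious of on-demand random number generators, amounts to pre-generating the full allocation schedule with a fixed seed so it can be reviewed before the trial and reproduced afterward. A minimal sketch in Python; the block size, arm labels, and seed are illustrative, not taken from the article:

```python
import random

def permuted_block_schedule(n_blocks, block_size=4, arms=("A", "B"), seed=20100601):
    """Pre-generate a permuted-block randomization schedule.

    Building the whole schedule up front with a fixed seed (rather than
    drawing assignments on demand) lets the allocation process be audited
    before the trial and reproduced exactly for later verification.
    """
    assert block_size % len(arms) == 0, "blocks must balance across arms"
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)          # permute within each block
        schedule.extend(block)
    return schedule

schedule = permuted_block_schedule(n_blocks=3)
# Balance holds within every block, and re-running with the same seed
# reproduces the identical schedule for verification.
```

Keeping the seed and generated schedule as a dataset also supports recommendation (5): the treatment allocation can be re-derived and checked at any point during the trial.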

  3. Comparing errors in Medicaid reporting across surveys: evidence to date.

    PubMed

    Call, Kathleen T; Davern, Michael E; Klerman, Jacob A; Lynch, Victoria

    2013-04-01

    To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys. All available validation studies. Compare results from existing research to understand variation in reporting across surveys. Synthesize all available studies validating survey reports of Medicaid coverage. Across all surveys, reporting some type of insurance coverage is better than reporting Medicaid specifically. Therefore, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as being particularly inaccurate. Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting. © Health Research and Educational Trust.

  4. On the interaction of deaffrication and consonant harmony*

    PubMed Central

    Dinnsen, Daniel A.; Gierut, Judith A.; Morrisette, Michele L.; Green, Christopher R.; Farris-Trimble, Ashley W.

    2010-01-01

    Error patterns in children’s phonological development are often described as simplifying processes that can interact with one another with different consequences. Some interactions limit the applicability of an error pattern, and others extend it to more words. Theories predict that error patterns interact to their full potential. While specific interactions have been documented for certain pairs of processes, no developmental study has shown that the range of typologically predicted interactions occurs for those processes. To determine whether this anomaly is an accidental gap or a systematic peculiarity of particular error patterns, two commonly occurring processes were considered, namely Deaffrication and Consonant Harmony. Results are reported from a cross-sectional and longitudinal study of 12 children (age 3;0 – 5;0) with functional phonological delays. Three interaction types were attested to varying degrees. The longitudinal results further instantiated the typology and revealed a characteristic trajectory of change. Implications of these findings are explored. PMID:20513256

  5. Irradiance measurement errors due to the assumption of a Lambertian reference panel

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Kirchner, J. A.

    1982-01-01

    A technique is presented for determining the error in diurnal irradiance measurements that results from the non-Lambertian behavior of a reference panel under various irradiance conditions. Spectral biconical reflectance factors of a spray-painted barium sulfate panel, along with simulated sky radiance data for clear and hazy skies at six solar zenith angles, were used to calculate the estimated panel irradiances and true irradiances for a nadir-looking sensor in two wavelength bands. The inherent errors in total spectral irradiance (0.68 microns) for a clear sky were 0.60, 6.0, 13.0, and 27.0% for solar zenith angles of 0, 45, 60, and 75 deg, respectively. The technique can be used to characterize the error of a specific panel used in field measurements, and thus eliminate any ambiguity of the effects of the type, preparation, and aging of the paint.

  6. Dopamine prediction errors in reward learning and addiction: from theory to neural circuitry

    PubMed Central

    Keiflin, Ronald; Janak, Patricia H.

    2015-01-01

    Summary Midbrain dopamine (DA) neurons are proposed to signal reward prediction error (RPE), a fundamental parameter in associative learning models. This RPE hypothesis provides a compelling theoretical framework for understanding DA function in reward learning and addiction. New studies support a causal role for DA-mediated RPE activity in promoting learning about natural reward; however, this question has not been explicitly tested in the context of drug addiction. In this review, we integrate theoretical models with experimental findings on the activity of DA systems, and on the causal role of specific neuronal projections and cell types, to provide a circuit-based framework for probing DA-RPE function in addiction. By examining error-encoding DA neurons in the neural network in which they are embedded, hypotheses regarding circuit-level adaptations that possibly contribute to pathological error-signaling and addiction can be formulated and tested. PMID:26494275

  7. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930

  8. Factors correlated with traffic accidents as a basis for evaluating Advanced Driver Assistance Systems.

    PubMed

    Staubach, Maria

    2009-09-01

    This study aims to identify factors which influence and cause errors in traffic accidents and to use these as a basis for information to guide the application and design of driver assistance systems. A total of 474 accidents were examined in depth for this study by means of a psychological survey, data from accident reports, and technical reconstruction information. An error analysis was subsequently carried out, taking into account the driver, environment, and vehicle sub-systems. Results showed that all accidents were influenced by errors as a consequence of distraction and reduced activity. For crossroad accidents, there were further errors resulting from sight obstruction, masked stimuli, focus errors, and law infringements. Lane departure crashes were additionally caused by errors as a result of masked stimuli, law infringements, expectation errors as well as objective and action slips, while same direction accidents occurred additionally because of focus errors, expectation errors, and objective and action slips. Most accidents were influenced by multiple factors. There is a safety potential for Advanced Driver Assistance Systems (ADAS), which support the driver in information assimilation and help to avoid distraction and reduced activity. The design of the ADAS is dependent on the specific influencing factors of the accident type.

  9. Medical error identification, disclosure, and reporting: do emergency medicine provider groups differ?

    PubMed

    Hobgood, Cherri; Weiner, Bryan; Tamayo-Sarver, Joshua H

    2006-04-01

    To determine if the three types of emergency medicine providers--physicians, nurses, and out-of-hospital providers (emergency medical technicians [EMTs])--differ in their identification, disclosure, and reporting of medical error. A convenience sample of providers in an academic emergency department evaluated ten case vignettes that represented two error types (medication and cognitive) and three severity levels. For each vignette, providers were asked the following: 1) Is this an error? 2) Would you tell the patient? 3) Would you report this to a hospital committee? To assess differences in identification, disclosure, and reporting by provider type, error type, and error severity, the authors constructed three-way tables with the nonparametric Somers' D clustered on participant. To assess the contribution of disclosure instruction and environmental variables, fixed-effects regression stratified by provider type was used. Of the 116 providers who were eligible, 103 (40 physicians, 26 nurses, and 35 EMTs) had complete data. Physicians were more likely to classify an event as an error (78%) than nurses (71%; p = 0.04) or EMTs (68%; p < 0.01). Nurses were less likely to disclose an error to the patient (59%) than physicians (71%; p = 0.04). Physicians were the least likely to report the error (54%) compared with nurses (68%; p = 0.02) or EMTs (78%; p < 0.01). For all provider and error types, identification, disclosure, and reporting increased with increasing severity. Improving patient safety hinges on the ability of health care providers to accurately identify, disclose, and report medical errors. Interventions must account for differences in error identification, disclosure, and reporting by provider type.

  10. Comparative Cost-Effectiveness Analysis of Three Different Automated Medication Systems Implemented in a Danish Hospital Setting.

    PubMed

    Risør, Bettina Wulff; Lisby, Marianne; Sørensen, Jan

    2018-02-01

    Automated medication systems have been found to reduce errors in the medication process, but little is known about the cost-effectiveness of such systems. The objective of this study was to perform a model-based indirect cost-effectiveness comparison of three different, real-world automated medication systems compared with current standard practice. The considered automated medication systems were a patient-specific automated medication system (psAMS), a non-patient-specific automated medication system (npsAMS), and a complex automated medication system (cAMS). The economic evaluation used original effect and cost data from prospective, controlled, before-and-after studies of medication systems implemented at a Danish hematological ward and an acute medical unit. Effectiveness was described as the proportion of clinical and procedural error opportunities that were associated with one or more errors. An error was defined as a deviation from the electronic prescription, from standard hospital policy, or from written procedures. The cost assessment was based on 6-month standardization of observed cost data. The model-based comparative cost-effectiveness analyses were conducted with system-specific assumptions of the effect size and costs in scenarios with consumptions of 15,000, 30,000, and 45,000 doses per 6-month period. With 30,000 doses the cost-effectiveness model showed that the cost-effectiveness ratio expressed as the cost per avoided clinical error was €24 for the psAMS, €26 for the npsAMS, and €386 for the cAMS. Comparison of the cost-effectiveness of the three systems in relation to different valuations of an avoided error showed that the psAMS was the most cost-effective system regardless of error type or valuation. The model-based indirect comparison against the conventional practice showed that psAMS and npsAMS were more cost-effective than the cAMS alternative, and that psAMS was more cost-effective than npsAMS.
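The reported cost-effectiveness ratio is simply the incremental cost divided by the number of errors avoided. A sketch with hypothetical figures (the study's underlying cost and error counts are not reproduced here; the numbers below are chosen only to yield a ratio of the same order as the psAMS result):

```python
def cost_per_avoided_error(delta_cost, errors_before, errors_after):
    """Incremental cost-effectiveness ratio: extra cost of the system
    divided by the number of errors it avoids."""
    avoided = errors_before - errors_after
    if avoided <= 0:
        raise ValueError("no errors avoided; the ratio is undefined")
    return delta_cost / avoided

# Hypothetical: a system costing an extra 48,000 euros per 6 months that
# cuts clinical errors from 3,000 to 1,000 across 30,000 doses:
ratio = cost_per_avoided_error(48_000, 3_000, 1_000)  # 48,000 / 2,000 = 24.0
```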

  11. Evaluation of near-miss and adverse events in radiation oncology using a comprehensive causal factor taxonomy.

    PubMed

    Spraker, Matthew B; Fain, Robert; Gopan, Olga; Zeng, Jing; Nyflot, Matthew; Jordan, Loucille; Kane, Gabrielle; Ford, Eric

    Incident learning systems (ILSs) are a popular strategy for improving safety in radiation oncology (RO) clinics, but few reports focus on the causes of errors in RO. The goal of this study was to test a causal factor taxonomy developed in 2012 by the American Association of Physicists in Medicine and adopted for use in the RO: Incident Learning System (RO-ILS). Three hundred event reports were randomly selected from an institutional ILS database and Safety in Radiation Oncology (SAFRON), an international ILS. The reports were split into 3 groups of 100 events each: low-risk institutional, high-risk institutional, and SAFRON. Three raters retrospectively analyzed each event for contributing factors using the American Association of Physicists in Medicine taxonomy. No events were described by a single causal factor (median, 7). The causal factor taxonomy was found to be applicable for all events, but 4 causal factors were not described in the taxonomy: linear accelerator failure (n = 3), hardware/equipment failure (n = 2), failure to follow through with a quality improvement intervention (n = 1), and workflow documentation was misleading (n = 1). The most common causal factor categories contributing to events were similar in all event types. The most common specific causal factor to contribute to events was a "slip causing physical error." Poor human factors engineering was the only causal factor found to contribute more frequently to high-risk institutional versus low-risk institutional events. The taxonomy in the study was found to be applicable for all events and may be useful in root cause analyses and future studies. Communication and human behaviors were the most common errors affecting all types of events. Poor human factors engineering was found to specifically contribute to high-risk more than low-risk institutional events, and may represent a strategy for reducing errors in all types of events. Copyright © 2017 American Society for Radiation Oncology. 
Published by Elsevier Inc. All rights reserved.

  12. Evaluation of the importance of time-frequency contributions to speech intelligibility in noise

    PubMed Central

    Yu, Chengzhu; Wójcicki, Kamil K.; Loizou, Philipos C.; Hansen, John H. L.; Johnson, Michael T.

    2014-01-01

    Recent studies on binary masking techniques make the assumption that each time-frequency (T-F) unit contributes an equal amount to the overall intelligibility of speech. The present study demonstrated that the importance of each T-F unit to speech intelligibility varies in accordance with speech content. Specifically, T-F units are categorized into two classes, speech-present T-F units and speech-absent T-F units. Results indicate that the importance of each speech-present T-F unit to speech intelligibility is highly related to the loudness of its target component, while the importance of each speech-absent T-F unit varies according to the loudness of its masker component. Two types of mask errors are also considered, which include miss and false alarm errors. Consistent with previous work, false alarm errors are shown to be more harmful to speech intelligibility than miss errors when the mixture signal-to-noise ratio (SNR) is below 0 dB. However, the relative importance between the two types of error is conditioned on the SNR level of the input speech signal. Based on these observations, a mask-based objective measure, the loudness weighted hit-false, is proposed for predicting speech intelligibility. The proposed objective measure shows significantly higher correlation with intelligibility compared to two existing mask-based objective measures. PMID:24815280
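A mask-based measure of this kind can be sketched as a loudness-weighted variant of the familiar hit-minus-false-alarm statistic: speech-present units are weighted by their target loudness, speech-absent units by their masker loudness. The weighting below is an assumption for illustration, not the paper's exact definition:

```python
def loudness_weighted_hit_false(ideal_mask, est_mask, target_loud, masker_loud):
    """Sketch of a HIT-FA style metric with per-unit loudness weights.

    ideal_mask: 1 for speech-present T-F units, 0 for speech-absent.
    est_mask:   the estimated binary mask being evaluated.
    Hits are weighted by target loudness, false alarms by masker loudness.
    """
    hit_w = fa_w = present_w = absent_w = 0.0
    for ideal, est, tl, ml in zip(ideal_mask, est_mask, target_loud, masker_loud):
        if ideal:                   # speech-present unit: misses hurt here
            present_w += tl
            if est:
                hit_w += tl
        else:                       # speech-absent unit: false alarms hurt here
            absent_w += ml
            if est:
                fa_w += ml
    hit = hit_w / present_w if present_w else 0.0
    fa = fa_w / absent_w if absent_w else 0.0
    return hit - fa

# Example: two speech-present and two speech-absent units; one miss
# (low-loudness target) and one false alarm (low-loudness masker).
score = loudness_weighted_hit_false([1, 1, 0, 0], [1, 0, 0, 1],
                                    [2.0, 1.0, 0.0, 0.0],
                                    [9.0, 9.0, 3.0, 1.0])
```

A weighting like this encodes the abstract's observation that errors on loud target or masker components matter more than errors on quiet ones.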

  13. Evaluation of Core Vocabulary Intervention for Treatment of Inconsistent Phonological Disorder: Three Treatment Case Studies

    ERIC Educational Resources Information Center

    McIntosh, Beth; Dodd, Barbara

    2009-01-01

    Children with unintelligible speech differ in severity, underlying deficit, type of surface error patterns and response to treatment. Detailed treatment case studies, evaluating specific intervention protocols for particular diagnostic groups, can identify best practice for children with speech disorder. Three treatment case studies evaluated the…

  14. Oranges, Posters, Ribbons, and Lemonade: Concrete Computational Strategies for Dividing Fractions

    ERIC Educational Resources Information Center

    Kribs-Zaleta, Christopher M.

    2008-01-01

    This article describes how sixth-grade students developed concrete models to solve division of fractions story problems. Students developed separate two-step procedures to solve measurement and partitive problems, drawing on invented procedures for division of whole numbers. Errors also tended to be specific to the type of division problem…

  15. 77 FR 18705 - Discontinuance of Form CO in Registration Practices

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-28

    ... ``likely to cause errors and lengthen the examination process.'' Attorney Joshua Kaufman, on the other hand... forms specific to various types of works--was also insufficient, maintained Mr. Kaufman, because these.... Kaufman's lone voice in favor of maintaining Form CO does not provide a strong reason for the Office to...

  16. Remediating Common Math Errors.

    ERIC Educational Resources Information Center

    Wagner, Rudolph F.

    1981-01-01

    Explanations and remediation suggestions for five types of mathematics errors due either to perceptual or cognitive difficulties are given. Error types include directionality problems, mirror writing, visually misperceived signs, diagnosed directionality problems, and mixed process errors. (CL)

  17. Heuristic errors in clinical reasoning.

    PubMed

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  18. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data

    PubMed Central

    Nevo, Daniel; Zucker, David M.; Tamimi, Rulla M.; Wang, Molin

    2017-01-01

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps–clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses’ Health Study to demonstrate the utility of our method. PMID:27558651
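The second source of error, subtype misclassification, biases subtype-specific associations even when it is nondifferential. A small numeric sketch of that bias (hypothetical risks, sensitivity, and specificity; not the article's likelihood-based correction):

```python
def observed_risk(p1, p2, sens, spec):
    """Probability of being *classified* as subtype 1, given true risks
    p1 (subtype 1) and p2 (subtype 2) and a classifier with the stated
    sensitivity and specificity (nondifferential misclassification)."""
    return sens * p1 + (1 - spec) * p2

# Hypothetical numbers: exposure doubles subtype-1 risk (true RR = 2.0).
rr_true = 0.02 / 0.01
rr_observed = (observed_risk(0.02, 0.01, sens=0.8, spec=0.9)
               / observed_risk(0.01, 0.01, sens=0.8, spec=0.9))
# rr_observed is about 1.89: misclassification attenuates the
# subtype-specific association toward the null, which is the bias the
# proposed methods account for.
```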

  19. Analysis of error type and frequency in apraxia of speech among Portuguese speakers.

    PubMed

    Cera, Maysa Luchesi; Minett, Thaís Soares Cianciarullo; Ortiz, Karin Zazo

    2010-01-01

    Most studies characterizing errors in the speech of patients with apraxia involve the English language. To analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. 20 adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and frequencies compared. We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis, in descending order of frequency, respectively. Omission type errors were among the most commonly occurring, whereas addition errors were infrequent. These findings differed from those reported in English-speaking patients, probably owing to differences in the methodologies used for classifying error types; the inclusion of speakers with apraxia secondary to aphasia; and the difference in the structure of Portuguese language to English in terms of syllable onset complexity and effect on motor control. The frequency of omission and addition errors observed differed from the frequency reported for speakers of English.

  20. Residents' numeric inputting error in computerized physician order entry prescription.

    PubMed

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting method in human-computer interaction (HCI), produce different error rates and types, but have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportion of transposition and intrusion error types was significantly higher than in previous research. Among the numbers 3, 8, and 9, the digits used less commonly in prescriptions, the error rate was higher, posing a risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad produced lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and of decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.
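The core idea, inferring discretization-error information from short integrations at different resolutions, can be illustrated far more simply with Richardson-style error estimation for a first-order scheme. The toy ODE below is a stand-in for a model run, not the paper's goal-oriented random-process machinery:

```python
import math

def euler(f, y0, t_end, n_steps):
    """Forward Euler integration, standing in for a model run at a
    given resolution."""
    y, dt = y0, t_end / n_steps
    for _ in range(n_steps):
        y = y + dt * f(y)
    return y

# Two integrations at different resolutions suffice to estimate the
# coarse run's discretization error: for a first-order scheme,
# err_coarse ~ 2 * (y_fine - y_coarse).
f = lambda y: -y                      # dy/dt = -y, exact y(1) = e^{-1}
coarse = euler(f, 1.0, 1.0, 50)
fine = euler(f, 1.0, 1.0, 100)
est_error = 2 * (fine - coarse)       # estimate without the exact solution
true_error = math.exp(-1.0) - coarse  # available only because this ODE is solvable
```

The estimate tracks the true error closely even though it never uses the exact solution, which is the practical appeal of resolution-comparison approaches.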

  2. Correcting false memories: Errors must be noticed and replaced.

    PubMed

    Mullet, Hillary G; Marsh, Elizabeth J

    2016-04-01

    Memory can be unreliable. For example, after reading The new baby stayed awake all night, people often misremember that the new baby cried all night (Brewer, 1977); similarly, after hearing bed, rest, and tired, people often falsely remember that sleep was on the list (Roediger & McDermott, 1995). In general, such false memories are difficult to correct, persisting despite warnings and additional study opportunities. We argue that errors must first be detected to be corrected; consistent with this argument, two experiments showed that false memories were nearly eliminated when conditions facilitated comparisons between participants' errors and corrective feedback (e.g., immediate trial-by-trial feedback that allowed direct comparisons between their responses and the correct information). However, knowledge that they had made an error was insufficient; unless the feedback message also contained the correct answer, the rate of false memories remained relatively constant. On the one hand, there is nothing special about correcting false memories: simply labeling an error as "wrong" is also insufficient for correcting other memory errors, including misremembered facts or mistranslations. However, unlike these other types of errors--which often benefit from the spacing afforded by delayed feedback--false memories require a special consideration: Learners may fail to notice their errors unless the correction conditions specifically highlight them.

  3. Theory of mind in schizophrenia: error types and associations with symptoms.

    PubMed

    Fretland, Ragnhild A; Andersson, Stein; Sundet, Kjetil; Andreassen, Ole A; Melle, Ingrid; Vaskinn, Anja

    2015-03-01

    Social cognition is an important determinant of functioning in schizophrenia. However, how social cognition relates to the clinical symptoms of schizophrenia is still unclear. The aim of this study was to explore the relationship between a social cognition domain, Theory of Mind (ToM), and the clinical symptoms of schizophrenia. Specifically, we investigated the associations between three ToM error types, 1) "overmentalizing", 2) "reduced ToM", and 3) "no ToM", and positive, negative and disorganized symptoms. Fifty-two participants with a diagnosis of schizophrenia or schizoaffective disorder were assessed with the Movie for the Assessment of Social Cognition (MASC), a video-based ToM measure. An empirically validated five-factor model of the Positive and Negative Syndrome Scale (PANSS) was used to assess clinical symptoms. There was a significant, small-moderate association between overmentalizing and positive symptoms (rho=.28, p=.04). Disorganized symptoms correlated at a trend level with "reduced ToM" (rho=.27, p=.05). There were no other significant correlations between ToM impairments and symptom levels. Positive/disorganized symptoms did not contribute significantly in explaining total ToM performance, whereas IQ did (B=.37, p=.01). Within the undermentalizing domain, participants made more "reduced ToM" errors than "no ToM" errors. Overmentalizing was associated with positive symptoms. The undermentalizing error types were unrelated to symptoms, but "reduced ToM" was somewhat associated with disorganization. The higher number of "reduced ToM" responses suggests that schizophrenia is characterized by accuracy problems rather than a fundamental lack of mental state concept. The findings call for the use of more sensitive measures when investigating ToM in schizophrenia to avoid the "right/wrong ToM" dichotomy. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Systematic evidence review of rates and burden of harm of intravenous admixture drug preparation errors in healthcare settings.

    PubMed

    Hedlund, Nancy; Beer, Idal; Hoppe-Tichy, Torsten; Trbovich, Patricia

    2017-12-28

    To examine published evidence on intravenous admixture preparation errors (IAPEs) in healthcare settings. Searches were conducted in three electronic databases (January 2005 to April 2017). Publications reporting rates of IAPEs and error types were reviewed and categorised into the following groups: component errors, dose/calculation errors, aseptic technique errors and composite errors. The methodological rigour of each study was assessed using the Hawker method. Of the 34 articles that met inclusion criteria, 28 reported the site of IAPEs: central pharmacies (n=8), nursing wards (n=14), both settings (n=4) and other sites (n=3). Using the Hawker criteria, 14% of the articles were of good quality, 74% were of fair quality and 12% were of poor quality. Error types and reported rates varied substantially, including wrong drug (~0% to 4.7%), wrong diluent solution (0% to 49.0%), wrong label (0% to 99.0%), wrong dose (0% to 32.6%), wrong concentration (0.3% to 88.6%), wrong diluent volume (0.06% to 49.0%) and inadequate aseptic technique (0% to 92.7%). Four studies directly compared incidence by preparation site and/or method, finding error incidence to be lower for doses prepared within a central pharmacy versus the nursing ward and lower for automated preparation versus manual preparation. Although eight studies (24%) reported ≥1 error with the potential to cause patient harm, no study directly linked IAPE occurrences to specific adverse patient outcomes. The available data suggest a need to continue to optimise the intravenous preparation process, focus on improving preparation workflow, design and implement preventive strategies, train staff on optimal admixture protocols and implement standardisation. Future research should focus on the development of consistent error subtype definitions, standardised reporting methodology and reliable, reproducible methods to track and link risk factors with the burden of harm associated with these errors.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Type I and Type II error concerns in fMRI research: re-balancing the scale

    PubMed Central

    Cunningham, William A.

    2009-01-01

    Statistical thresholding (i.e. P-values) in fMRI research has become increasingly conservative over the past decade in an attempt to diminish Type I errors (i.e. false alarms) to a level traditionally allowed in behavioral science research. In this article, we examine the unintended negative consequences of this single-minded devotion to Type I errors: increased Type II errors (i.e. missing true effects), a bias toward studying large rather than small effects, a bias toward observing sensory and motor processes rather than complex cognitive and affective processes and deficient meta-analyses. Power analyses indicate that the reductions in acceptable P-values over time are producing dramatic increases in the Type II error rate. Moreover, the push for a mapwide false discovery rate (FDR) of 0.05 is based on the assumption that this is the FDR in most behavioral research; however, this is an inaccurate assessment of the conventions in actual behavioral research. We report simulations demonstrating that combined intensity and cluster size thresholds such as P < 0.005 with a 10 voxel extent produce a desirable balance between Types I and II error rates. This joint threshold produces high but acceptable Type II error rates and produces a FDR that is comparable to the effective FDR in typical behavioral science articles (while a 20 voxel extent threshold produces an actual FDR of 0.05 with relatively common imaging parameters). We recommend a greater focus on replication and meta-analysis rather than emphasizing single studies as the unit of analysis for establishing scientific truth. From this perspective, Type I errors are self-erasing because they will not replicate, thus allowing for more lenient thresholding to avoid Type II errors. PMID:20035017
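
    The joint intensity-and-extent threshold recommended above (P < 0.005 with a 10-voxel extent) can be probed with a toy simulation. The sketch below is illustrative only: the map size, subject count, and effect magnitude are invented, not the paper's simulation parameters.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def cluster_threshold(pvals, alpha=0.005, extent=10):
    """Keep voxels that pass p < alpha AND sit in a contiguous
    run (cluster) of at least `extent` suprathreshold voxels."""
    supra = pvals < alpha
    keep = np.zeros_like(supra)
    start = None
    for i, s in enumerate(np.append(supra, False)):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if i - start >= extent:
                keep[start:i] = True
            start = None
    return keep

# Toy 1-D "brain": 2000 null voxels plus one 30-voxel active region
n_sub = 20
null = rng.normal(0.0, 1.0, (n_sub, 2000))
signal = rng.normal(0.8, 1.0, (n_sub, 30))   # true effect, d = 0.8
data = np.concatenate([null, signal], axis=1)
truth = np.r_[np.zeros(2000, bool), np.ones(30, bool)]

t, p = stats.ttest_1samp(data, 0.0)          # per-voxel one-sample t-test
detected = cluster_threshold(p, alpha=0.005, extent=10)

fp = (detected & ~truth).sum()               # Type I errors surviving the joint threshold
tp = (detected & truth).sum()                # true voxels recovered
print("false positives:", fp, " true positives:", tp)
```

    Sweeping `alpha` and `extent` in this sketch shows the trade-off the article describes: stricter intensity thresholds alone drive up misses (Type II errors), while the cluster-extent requirement suppresses isolated false positives cheaply.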

  6. Types of diagnostic errors in neurological emergencies in the emergency department.

    PubMed

    Dubosh, Nicole M; Edlow, Jonathan A; Lefton, Micah; Pope, Jennifer V

    2015-02-01

    Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.
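
    Confidence intervals on error-type proportions like those above are typically computed with a Wilson score interval. A minimal sketch follows; the count used (13 of 29 error cases) is an illustrative guess consistent with the reported 45.2%, not a figure taken from the study.

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# e.g. 13 of 29 error cases attributed to a knowledge gap (illustrative)
lo, hi = wilson_ci(13, 29)
print(f"95% CI ({lo:.3f}, {hi:.3f})")  # ≈ (0.284, 0.625)
```

    For samples this small the Wilson interval is markedly wider than the naive normal approximation, which is why the reported intervals span roughly 30 percentage points.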

  7. Rapid, Accurate, and Non-Invasive Measurement of Zebrafish Axial Length and Other Eye Dimensions Using SD-OCT Allows Longitudinal Analysis of Myopia and Emmetropization

    PubMed Central

    Collery, Ross F.; Veth, Kerry N.; Dubis, Adam M.; Carroll, Joseph; Link, Brian A.

    2014-01-01

    Refractive errors in vision can be caused by aberrant axial length of the eye, irregular corneal shape, or lens abnormalities. Causes of eye length overgrowth include multiple genetic loci and visual parameters. We evaluate zebrafish as a potential animal model for studies of the genetic, cellular, and signaling basis of emmetropization and myopia. Axial length and other eye dimensions of zebrafish were measured using spectral domain-optical coherence tomography (SD-OCT). We used ocular lens and body metrics to normalize and compare eye size and relative refractive error (difference between observed retinal radial length and controls) in wild-type and lrp2 zebrafish. Zebrafish were dark-reared to assess effects of visual deprivation on eye size. Two relative measurements, ocular axial length to body length and axial length to lens diameter, were found to accurately normalize comparisons of eye sizes between different sized fish (R2 = 0.9548, R2 = 0.9921). Ray-traced focal lengths of wild-type zebrafish lenses were equal to their retinal radii, while lrp2 eyes had longer retinal radii than focal lengths. Both genetic mutation (lrp2) and environmental manipulation (dark-rearing) caused elongated eye axes. lrp2 mutants had relative refractive errors of −0.327 compared to wild-types, and dark-reared wild-type fish had relative refractive errors of −0.132 compared to light-reared siblings. Therefore, zebrafish eye anatomy (axial length, lens radius, retinal radius) can be rapidly and accurately measured by SD-OCT, facilitating longitudinal studies of regulated eye growth and emmetropization. Specifically, genes homologous to human myopia candidates may be modified, inactivated or overexpressed in zebrafish, and myopia-sensitizing conditions used to probe gene-environment interactions. Our studies provide a foundation for such investigations into genetic contributions that control eye size and impact refractive errors. PMID:25334040
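
    A minimal sketch of the normalizations described above. The measurements are invented, and the relative-refractive-error formula is an assumption (a fractional difference from a size-matched control), since the abstract does not give the exact definition.

```python
import numpy as np

# Hypothetical SD-OCT measurements (mm) for four fish; values are
# illustrative only, not data from the study
body_len  = np.array([22.0, 25.0, 28.0, 31.0])
lens_diam = np.array([0.62, 0.70, 0.78, 0.86])
axial_len = np.array([1.05, 1.19, 1.33, 1.47])

# The two normalizations used to compare eye size across fish of
# different body sizes
ax_to_body = axial_len / body_len
ax_to_lens = axial_len / lens_diam

def relative_refractive_error(measured_retinal_radius, control_radius):
    """Assumed form: fractional difference between an eye's retinal
    radius and the size-matched control value. Negative values mean
    an elongated, relatively myopic eye, matching the sign convention
    of the lrp2 and dark-rearing results above."""
    return (control_radius - measured_retinal_radius) / control_radius

rre = relative_refractive_error(0.90, 0.85)  # elongated eye -> negative
print(ax_to_body, ax_to_lens, rre)
```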

  8. Flexible and inflexible response components: a Stroop study with typewritten output.

    PubMed

    Damian, Markus F; Freeman, Norman H

    2008-05-01

    Two experiments were directed at investigating the relationship between response selection and execution in typewriting, and specifically the extent to which concurrent processing takes place. In a Stroop paradigm adapted from [Logan, G. D., & Zbrodoff, N. J. (1998). Stroop-type interference: Congruity effects in colour naming with typewritten responses. Journal of Experimental Psychology: Human Perception and Performance, 24, 978-992], participants typed the names of colour patches with incongruent, congruent, or neutral distractors presented at various stimulus-onset asynchronies. Experiment 1 showed Stroop interference and facilitation for initial keystroke latencies and errors, contrasting with response durations (a measure of response execution) being unaffected by the Stroop manipulation. Experiment 2 showed that all three measures were responsive to time pressure; again, Stroop effects were confined to latencies and errors only. The observation that response duration is both flexible under time pressure and protected from response competition may imply either that response execution is structurally segregated from earlier processing stages, or that encapsulation develops during the acquisition of typing skills.

  9. Heading control and the effects of display characteristics

    NASA Technical Reports Server (NTRS)

    Hinz, Stephanie J.; Bennett, C. Thomas

    1989-01-01

    The present study evaluates whether type of display (dot or wire frame) and direction of movement have an effect on a person's ability to actively maintain a specific heading angle. The questions addressed were: (1) does the magnitude of the heading angle errors differ in the two displays, (2) are some heading angles more difficult to maintain than others, and (3) does the magnitude of some errors differ as a function of display type and direction of movement. Differences between the results of this study and previous research are explained by methodological differences across the studies. Another factor that may be responsible for the difference between previous findings and those presented here is the type of graphics used to display the simulated motion. The physical characteristics of the display or the graphics engines that generate the scene have varied greatly across the studies. Analyses and diagrams are presented showing results of the study and the differences generated from previous studies on this subject.

  10. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    PubMed

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
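
    The inflation described above is easy to reproduce in miniature: regress a null, skewed trait on a rare SNV genotype and count rejections. A hedged sketch follows; the sample sizes, distributions, and simulation counts are illustrative, not the GAW 19 setup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def type1_rate(maf, trait_sampler, n=2000, n_sim=500, alpha=0.05):
    """Empirical type I error of simple linear regression of a null
    trait on an SNV genotype (0/1/2) with the given minor allele
    frequency. The trait is drawn independently of the genotype, so
    every rejection is a false positive."""
    hits = 0
    for _ in range(n_sim):
        g = rng.binomial(2, maf, n)      # genotypes under HWE
        y = trait_sampler(n)             # trait independent of g
        if g.std() == 0:                 # monomorphic draw, skip
            continue
        p = stats.linregress(g, y).pvalue
        hits += p < alpha
    return hits / n_sim

normal = lambda n: rng.normal(size=n)
gamma  = lambda n: rng.gamma(1.0, size=n)  # strongly skewed trait

rate_norm = type1_rate(0.005, normal)
rate_gam  = type1_rate(0.005, gamma)
print("normal trait, rare SNV :", rate_norm)
print("skewed trait, rare SNV :", rate_gam)
```

    With a rare variant, only a handful of carriers drive the slope estimate, so a skewed trait violates the normal-theory p-value; the effect grows as the minor allele frequency and the significance threshold shrink, as the abstract reports.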

  11. Comparing Errors in Medicaid Reporting across Surveys: Evidence to Date

    PubMed Central

    Call, Kathleen T; Davern, Michael E; Klerman, Jacob A; Lynch, Victoria

    2013-01-01

    Objective To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys. Data Sources All available validation studies. Study Design Compare results from existing research to understand variation in reporting across surveys. Data Collection Methods Synthesize all available studies validating survey reports of Medicaid coverage. Principal Findings Across all surveys, reporting some type of insurance coverage is better than reporting Medicaid specifically. Therefore, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as being particularly inaccurate. Conclusions Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting. PMID:22816493

  12. Isolating the Effects of a Mobile Phone on the Usability and Safety of eHealth Software Applications.

    PubMed

    Borycki, Elizabeth M; Griffith, Janessa; Monkman, Helen; Reid-Haughian, Cheryl

    2017-01-01

    Mobile phones are used in conjunction with mobile eHealth software applications. These mobile software applications can be used to access, review and document clinical information. The objective of this research was to explore the relationship between mobile phones, usability and safety. Clinical simulations and semi-structured interviews were used to investigate this relationship. The findings revealed that mobile phones may lead to specific types of usability issues that may introduce some types of errors.

  13. The Relation Between Inflation in Type-I and Type-II Error Rate and Population Divergence in Genome-Wide Association Analysis of Multi-Ethnic Populations.

    PubMed

    Derks, E M; Zwinderman, A H; Gamazon, E R

    2017-05-01

    Population divergence impacts the degree of population stratification in Genome-Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (FST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of FST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of FST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rate was adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
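
    Population divergence here is quantified by Wright's FST. For a single biallelic SNP in two equal-sized subpopulations, a minimal computation from the allele frequencies is:

```python
def fst(p1, p2):
    """Wright's FST for a biallelic SNP in two equal-sized
    subpopulations with allele frequencies p1 and p2:
    (H_T - H_S) / H_T, comparing pooled to within-population
    expected heterozygosity."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)              # pooled expected het.
    h_s = (2*p1*(1-p1) + 2*p2*(1-p2)) / 2      # mean within-pop het.
    return (h_t - h_s) / h_t

print(fst(0.5, 0.5))   # no divergence -> 0.0
print(fst(0.1, 0.6))   # strongly differentiated SNP, ≈ 0.275
```

    SNPs with high FST are exactly the ones whose frequency differences between ancestral populations can masquerade as trait associations in admixed samples, which is why the study stratifies its error-rate analysis by FST.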

  14. Speech Characteristics and Intelligibility in Adults with Mild and Moderate Intellectual Disabilities

    PubMed Central

    Coppens-Hofman, Marjolein C.; Terband, Hayo; Snik, Ad F.M.; Maassen, Ben A.M.

    2017-01-01

    Purpose Adults with intellectual disabilities (ID) often show reduced speech intelligibility, which affects their social interaction skills. This study aims to establish the main predictors of this reduced intelligibility in order to ultimately optimise management. Method Spontaneous speech and picture naming tasks were recorded in 36 adults with mild or moderate ID. Twenty-five naïve listeners rated the intelligibility of the spontaneous speech samples. Performance on the picture-naming task was analysed by means of a phonological error analysis based on expert transcriptions. Results The transcription analyses showed that the phonemic and syllabic inventories of the speakers were complete. However, multiple errors at the phonemic and syllabic level were found. The frequencies of specific types of errors were related to intelligibility and quality ratings. Conclusions The development of the phonemic and syllabic repertoire appears to be completed in adults with mild-to-moderate ID. The charted speech difficulties can be interpreted to indicate speech motor control and planning difficulties. These findings may aid the development of diagnostic tests and speech therapies aimed at improving speech intelligibility in this specific group. PMID:28118637

  15. Patient-reported Outcomes for Assessment of Quality of Life in Refractive Error: A Systematic Review.

    PubMed

    Kandel, Himal; Khadka, Jyoti; Goggin, Michael; Pesudovs, Konrad

    2017-12-01

    This review has identified the best existing patient-reported outcome (PRO) instruments in refractive error. The article highlights the limitations of the existing instruments and discusses the way forward. A systematic review was conducted to identify the types of PROs used in refractive error, to determine the quality of the existing PRO instruments in terms of their psychometric properties, and to determine the limitations in the content of the existing PRO instruments. Articles describing a PRO instrument measuring 1 or more domains of quality of life in people with refractive error were identified by electronic searches on the MEDLINE, PubMed, Scopus, Web of Science, and Cochrane databases. The information on content development, psychometric properties, validity, reliability, and responsiveness of those PRO instruments was extracted from the selected articles. The analysis was done based on a comprehensive set of assessment criteria. One hundred forty-eight articles describing 47 PRO instruments in refractive error were included in the review. Most of the articles (99 [66.9%]) used refractive error-specific PRO instruments. The PRO instruments comprised 19 refractive, 12 vision but nonrefractive, and 16 generic PRO instruments. Only 17 PRO instruments were validated in refractive error populations; six of them were developed using Rasch analysis. None of the PRO instruments has items across all domains of quality of life. The Quality of Life Impact of Refractive Correction, the Quality of Vision, and the Contact Lens Impact on Quality of Life have comparatively better quality, with some limitations, than the other PRO instruments. This review describes the PRO instruments and informs the choice of an appropriate measure in refractive error. We identified the need for a comprehensive and scientifically robust refractive error-specific PRO instrument. An item bank and computer-adaptive testing system may be the way to provide such an instrument.

  16. Exploring the Phenotype of Phonological Reading Disability as a Function of the Phonological Deficit Severity: Evidence from the Error Analysis Paradigm in Arabic

    ERIC Educational Resources Information Center

    Taha, Haitham; Ibrahim, Raphiq; Khateb, Asaid

    2014-01-01

    The dominant error types were investigated as a function of phonological processing (PP) deficit severity in four groups of impaired readers. For this aim, an error analysis paradigm distinguishing between four error types was used. The findings revealed that the different types of impaired readers were characterized by differing predominant error…

  17. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    NASA Astrophysics Data System (ADS)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students’ competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This may be caused by the various types of errors students make. Hence, this study aimed at identifying students’ errors in solving mathematical problems in TIMSS on the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students, of 34 eighth graders, who made the most errors on the test indicators. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving the Applying-level problem, the errors students made were operational errors. In addition, for the Reasoning-level problem, there were three types of errors: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.

  18. What does "Diversity" Mean for Public Engagement in Science? A New Metric for Innovation Ecosystem Diversity.

    PubMed

    Özdemir, Vural; Springer, Simon

    2018-03-01

    Diversity is increasingly at stake in the early 21st century. Diversity is often conceptualized across ethnicity, gender, socioeconomic status, sexual preference, and professional credentials, among other categories of difference. These are important and relevant considerations, and yet they are incomplete. Diversity also rests in the way we frame questions long before answers are sought. Such diversity in the framing (epistemology) of scientific and societal questions is important because it influences the types of data, results, and impacts produced by research. Errors in the framing of a research question, whether in technical science or social science, are known as type III errors, as opposed to the better-known type I errors (false positives) and type II errors (false negatives). Kimball defined the "error of the third kind" as giving the right answer to the wrong problem. Raiffa described the type III error as correctly solving the wrong problem. Type III errors are upstream or design flaws, often driven by unchecked human values and power; they can adversely impact an entire innovation ecosystem and waste money, time, careers, and precious resources by focusing on the wrong or incorrectly framed question and hypothesis. Decades may pass while technology experts, scientists, social scientists, funding agencies and management consultants continue to tackle questions that suffer from type III errors. We propose a new diversity metric, the Frame Diversity Index (FDI), based on the hitherto neglected diversities in knowledge framing. The FDI would be positively correlated with epistemological diversity and technological democracy, and inversely correlated with the prevalence of type III errors in innovation ecosystems, consortia, and knowledge networks. We suggest that the FDI can usefully measure (and prevent) type III error risks in innovation ecosystems, and help broaden the concepts and practices of diversity and inclusion in science, technology, innovation and society.
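
    The abstract proposes the FDI but gives no formula. As a purely hypothetical illustration (the function name and construction below are not from the article), one could normalize Shannon entropy over the question-framings present in a consortium:

```python
import math
from collections import Counter

def frame_diversity_index(frames):
    """Hypothetical sketch of a Frame Diversity Index (FDI):
    normalized Shannon entropy over the question-framings present
    in a research network. The abstract proposes the FDI but does
    not specify a formula; this is one plausible construction."""
    counts = Counter(frames)
    n = sum(counts.values())
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    h_max = math.log(len(counts)) if len(counts) > 1 else 1.0
    return h / h_max

# A consortium where 8 of 10 projects share one framing scores low;
# an even spread of framings scores 1.0
low_fdi = frame_diversity_index(["genomic"] * 8 + ["ethics", "economic"])
uniform_fdi = frame_diversity_index(["genomic", "ethics", "economic"])
print(low_fdi, uniform_fdi)
```

    On this reading, a low FDI flags a network dominated by a single framing, i.e. one at elevated risk of type III errors.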

  19. Tailoring a Human Reliability Analysis to Your Industry Needs

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2016-01-01

    Accidents caused by human error can have catastrophic consequences across many industries: airline mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies is used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur, such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk and reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. 
If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involving humans operating large equipment or transport systems (e.g. railroads or airlines) would have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.

  20. Single-variant and multi-variant trend tests for genetic association with next-generation sequencing that are robust to sequencing error.

    PubMed

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Alejandro Q; Musolf, Anthony; Matise, Tara C; Finch, Stephen J; Gordon, Derek

    2012-01-01

    As with any new technology, next-generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to those data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. 
Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single-variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p value, no matter how many loci. Copyright © 2013 S. Karger AG, Basel.
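
    The LTTae,NGS described above extends the classic linear (Cochran-Armitage) trend test. A sketch of the base statistic only follows; the paper's differential-misclassification correction and multi-variant form are not implemented here, and the genotype counts are invented.

```python
import numpy as np
from scipy import stats

def linear_trend_test(case_counts, control_counts, weights=(0, 1, 2)):
    """Cochran-Armitage linear trend test for a 2x3 genotype table.
    `case_counts` / `control_counts` are counts of (aa, Aa, AA)
    genotypes; `weights` encode the additive dosage model. This is
    only the base statistic that LTTae,NGS builds on."""
    r = np.asarray(case_counts, float)     # cases per genotype
    s = np.asarray(control_counts, float)  # controls per genotype
    w = np.asarray(weights, float)
    n = r + s
    R, N = r.sum(), n.sum()
    # Trend statistic: weighted deviation of case counts from
    # their expectation under no association
    T = np.sum(w * (r - R / N * n))
    var = R * (N - R) / N * (
        np.sum(w**2 * n) / N - (np.sum(w * n) / N) ** 2)
    z = T / np.sqrt(var)
    return z, 2 * stats.norm.sf(abs(z))

# Illustrative table: minor-allele dosage rises with case status
z, p = linear_trend_test([20, 40, 40], [40, 40, 20])
print(f"z = {z:.2f}, p = {p:.4f}")
```

    The error-allowing extension of the paper additionally models per-group sequencing misclassification rates when forming the expected counts, which is what keeps the type I error rate correct at low coverage.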

  1. Single variant and multi-variant trend tests for genetic association with next generation sequencing that are robust to sequencing error

    PubMed Central

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Andrew; Musolf, Anthony; Matise, Tara C.; Finch, Stephen J.; Gordon, Derek

    2013-01-01

    As with any new technology, next generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model, based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to that data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. 
Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p-value, no matter how many loci. PMID:23594495

  2. Spectacle wearing in children randomised to ready-made or custom spectacles, and potential cost savings to programmes: study protocol for a randomised controlled trial.

    PubMed

    Morjaria, Priya; Murali, Kaushik; Evans, Jennifer; Gilbert, Clare

    2016-01-19

    Uncorrected refractive errors are the commonest cause of visual impairment in children, with myopia being the most frequent type. Myopia usually starts around 9 years of age and progresses throughout adolescence. Hyperopia usually affects younger children, and astigmatism affects all age groups. Many children have a combination of myopia and astigmatism. To correct refractive errors, the type and degree of refractive error are measured and appropriate corrective lenses prescribed and dispensed in the spectacle frame of choice. Custom spectacles (that is, with the correction specifically required for that individual) are required if astigmatism is present, and/or the refractive error differs between eyes. Spectacles without astigmatic correction and where the refractive error is the same in both eyes are straightforward to dispense. These are known as 'ready-made' spectacles. High-quality spectacles of this type can be produced in high volume at an extremely low cost. Although spectacle correction improves visual function, a high proportion of children do not wear their spectacles for a variety of reasons. The aim of this study is to compare spectacle wear at 3-4 months amongst school children aged 11 to 15 years who have significant, simple uncorrected refractive error randomised to ready-made or custom spectacles of equivalent quality, and to evaluate cost savings to programmes. The study will take place in urban and semi-urban government schools in Bangalore, India. The hypothesis is that similar proportions of children randomised to ready-made or custom spectacles will be wearing their spectacles at 3-4 months. The trial is a randomised, non-inferiority, double masked clinical trial of children with simple uncorrected refractive errors. After screening, children will be randomised to ready-made or custom spectacles. Children will choose their preferred frame design. After 3-4 months the children will be followed up to assess spectacle wear. 
Ready-made spectacles have benefits for providers as well as parents and children, as a wide range of prescriptions and frame types can be taken to schools and dispensed immediately. In contrast, custom spectacles have to be individually made up in optical laboratories, and taken back to the school and given to the correct child. ISRCTN14715120 (Controlled-Trials.com) Date registered: 04 February 2015.

  3. Stochastic characterization of phase detection algorithms in phase-shifting interferometry

    DOE PAGES

    Munteanu, Florin

    2016-11-01

    Phase-shifting interferometry (PSI) is the preferred non-contact method for profiling sub-nanometer surfaces. Based on monochromatic light interference, the method computes the surface profile from a set of interferograms collected at separate stepping positions. Errors are introduced into the estimated profile when these positions are not located correctly. To cope with this problem, various algorithms that minimize the effects of certain types of stepping errors (linear, sinusoidal, etc.) have been developed. Despite the relatively large number of algorithms suggested in the literature, there is no unified way of characterizing their performance when additional unaccounted random errors are present. Here, we suggest a procedure for quantifying the expected behavior of each algorithm in the presence of independent and identically distributed (i.i.d.) random stepping errors, which can occur in addition to the systematic errors for which the algorithm has been designed. The usefulness of this procedure is that it can guide the selection of the best algorithm for specific measurement situations.
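
As a concrete illustration of how random stepping errors corrupt a PSI phase estimate, the sketch below simulates the classic four-step algorithm (nominal pi/2 steps) with i.i.d. Gaussian jitter added to the step positions. The intensity model and parameter values are hypothetical, chosen only to show the mechanism the abstract describes:

```python
import numpy as np

rng = np.random.default_rng(0)

def four_step_phase_errors(phi, step_sigma=0.0, n_trials=1):
    """Recover a known phase phi with the classic 4-step PSI algorithm,
    adding i.i.d. Gaussian jitter to the nominal pi/2 stepping positions.
    Returns the phase-estimation error for each simulated trial."""
    errors = []
    for _ in range(n_trials):
        steps = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
        steps += rng.normal(0.0, step_sigma, 4)   # i.i.d. random stepping error
        I = 1.0 + 0.8 * np.cos(phi + steps)       # simulated interferogram intensities
        estimate = np.arctan2(I[3] - I[1], I[0] - I[2])
        errors.append(estimate - phi)
    return np.array(errors)
```

With `step_sigma=0` the algorithm recovers phi exactly; as the jitter grows, the spread of the recovered phase grows with it, which is the kind of expected behavior the proposed characterization procedure quantifies.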

  4. The error in total error reduction.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
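
The contrast between the two update rules can be made concrete with a toy simulation. The code below is a minimal sketch (learning rate, epochs, and stimuli are arbitrary illustrative choices, not the paper's actual models): TER updates every cue from one shared prediction error, Rescorla-Wagner style, while LER updates each cue from its own local error.

```python
import numpy as np

def train(stimuli, outcomes, rule="TER", lr=0.1, epochs=50):
    """Train associative weights under total (TER) or local (LER)
    error reduction on repeated presentations of the given trials."""
    w = np.zeros(stimuli.shape[1])
    for _ in range(epochs):
        for x, y in zip(stimuli, outcomes):
            if rule == "TER":
                delta = y - np.dot(w, x)   # one error for the whole compound
                w += lr * delta * x
            else:
                delta = y - w * x          # a separate error for each cue
                w += lr * delta * x
    return w

# One compound trial: cues A and B presented together, outcome = 1
ab_plus = np.array([[1.0, 1.0]])
outcome = np.array([1.0])
w_ter = train(ab_plus, outcome, rule="TER")  # cues share the outcome: ~[0.5, 0.5]
w_ler = train(ab_plus, outcome, rule="LER")  # each cue learns fully: ~[1.0, 1.0]
```

Under TER the two cues compete for a fixed amount of associative strength (their weights sum to the outcome), whereas under LER each cue independently approaches the outcome; it is this kind of divergent prediction that lets data discriminate the two model families.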

  5. An observational study of drug administration errors in a Malaysian hospital (study of drug administration errors).

    PubMed

    Chua, S S; Tea, M H; Rahman, M H A

    2009-04-01

    Drug administration errors are the second most frequent type of medication error, after prescribing errors; but whereas the latter are often intercepted, administration errors are more likely to reach the patient. This study was therefore conducted to determine the frequency and types of drug administration errors in a Malaysian hospital ward. This is a prospective study that involved direct, undisguised observation of drug administrations in a hospital ward. A researcher was stationed in the ward under study for 15 days to observe all drug administrations, which were recorded in a data collection form and then compared with the drugs prescribed for the patient. A total of 1118 opportunities for error were observed and 127 administrations had errors. This gave an error rate of 11.4% [95% confidence interval (CI) 9.5-13.3]. If incorrect time errors were excluded, the error rate was reduced to 8.7% (95% CI 7.1-10.4). The most common types of drug administration errors were incorrect time (25.2%), followed by incorrect technique of administration (16.3%) and unauthorized drug errors (14.1%). In terms of clinical significance, 10.4% of the administration errors were considered potentially life-threatening. Intravenous routes were more likely to be associated with an administration error than oral routes (21.3% vs. 7.9%, P < 0.001). The study indicates that the frequency of drug administration errors in developing countries such as Malaysia is similar to that in developed countries. Incorrect time errors were also the most common type of drug administration error. A non-punitive system of reporting medication errors should be established to encourage more information to be documented, so that a risk management protocol can be developed and implemented.
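
The headline error rate and interval can be reproduced from the reported counts with a standard Wald approximation (a sketch for illustration; the paper does not state which interval method it used, so the last decimal may differ slightly from the published 9.5-13.3):

```python
import math

def error_rate_ci(errors, opportunities, z=1.96):
    """Observed error rate with an approximate 95% Wald confidence interval."""
    p = errors / opportunities
    half_width = z * math.sqrt(p * (1 - p) / opportunities)
    return p, p - half_width, p + half_width

# Reported counts: 127 errors in 1118 observed opportunities for error
rate, lo, hi = error_rate_ci(127, 1118)
print(f"{rate:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # prints: 11.4% (95% CI 9.5% to 13.2%)
```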

  6. Quality Indicators in Laboratory Medicine: from theory to practice. Preliminary data from the IFCC Working Group Project "Laboratory Errors and Patient Safety".

    PubMed

    Sciacovelli, Laura; O'Kane, Maurice; Skaik, Younis Abdelwahab; Caciagli, Patrizio; Pellegrini, Cristina; Da Rin, Giorgio; Ivanov, Agnes; Ghys, Timothy; Plebani, Mario

    2011-05-01

    The adoption of Quality Indicators (QIs) has prompted the development of tools to measure and evaluate the quality and effectiveness of laboratory testing, first in the hospital setting and subsequently in ambulatory and other care settings. While Laboratory Medicine has an important role in the delivery of high-quality care, no consensus exists as yet on the use of QIs covering all steps of the laboratory total testing process (TTP), and further research in this area is required. In order to reduce errors in laboratory testing, the IFCC Working Group on "Laboratory Errors and Patient Safety" (WG-LEPS) developed a series of Quality Indicators specifically designed for clinical laboratories. In the first phase of the project, specific QIs for key processes of the TTP were identified, including all the pre-, intra- and post-analytical steps. The overall aim of the project is to create a common reporting system for clinical laboratories based on standardized data collection, and to define state-of-the-art and Quality Specifications (QSs) for each QI independent of: a) the size of the organization and type of activities; b) the complexity of the processes undertaken; and c) the different degrees of knowledge and ability of the staff. The aim of the present paper is to report the results collected from participating laboratories from February 2008 to December 2009 and to identify preliminary QSs. The results demonstrate that a model of Quality Indicators managed as an External Quality Assurance Program can serve as a tool to monitor and control the pre-, intra- and post-analytical activities. It might also allow clinical laboratories to identify risks that lead to errors resulting in patient harm, to identify and design practices that eliminate medical errors, to share information with and educate clinical and laboratory teams on practices that reduce or prevent errors, and to monitor and evaluate improvement activities.

  7. Defining a roadmap for harmonizing quality indicators in Laboratory Medicine: a consensus statement on behalf of the IFCC Working Group "Laboratory Error and Patient Safety" and EFLM Task and Finish Group "Performance specifications for the extra-analytical phases".

    PubMed

    Sciacovelli, Laura; Panteghini, Mauro; Lippi, Giuseppe; Sumarac, Zorica; Cadamuro, Janne; Galoro, César Alex De Olivera; Pino Castro, Isabel Garcia Del; Shcolnik, Wilson; Plebani, Mario

    2017-08-28

    Improving the quality of laboratory testing requires a deep understanding of the many vulnerable steps involved in the total examination process (TEP), along with the identification of a hierarchy of risks and challenges that need to be addressed. From this perspective, the Working Group "Laboratory Errors and Patient Safety" (WG-LEPS) of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) is focusing its activity on implementation of an efficient tool for obtaining meaningful information on the risk of errors arising throughout the TEP, and for establishing reliable information about error frequencies and their distribution. More recently, the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) has created the Task and Finish Group "Performance specifications for the extra-analytical phases" (TFG-PSEP) to define performance specifications for the extra-analytical phases. Both the IFCC and EFLM groups are working to provide laboratories with a system to evaluate their performance and recognize the critical aspects where improvement actions are needed. A Consensus Conference was organized in Padova, Italy, in 2016 in order to bring together all the experts and interested parties to achieve a consensus for effective harmonization of quality indicators (QIs). A general agreement was achieved, and the main outcomes have been the release of a new version of the model of quality indicators (MQI), the approval of a criterion for establishing performance specifications, and the definition of the type of information that should be provided within the report to the clinical laboratories participating in the QIs project.

  8. The effect of toe marker placement error on joint kinematics and muscle forces using OpenSim gait simulation.

    PubMed

    Xu, Hang; Merryweather, Andrew; Bloswick, Donald; Mao, Qi; Wang, Tong

    2015-01-01

    Marker placement can be a significant source of error in biomechanical studies of human movement. Toe marker placement error is amplified by footwear, since placement of the toe marker on the shoe relies only on an approximation of the underlying anatomical landmarks. Three total knee replacement subjects were recruited and three self-selected-speed gait trials per subject were collected. The effect of the height variation between toe and heel markers across four types of footwear was evaluated in terms of joint kinematics and muscle forces computed using OpenSim. The reference condition placed the toe and heel markers at the same vertical height. The results showed that the residual variances in joint kinematics had an approximately linear relationship with toe marker placement error for the lower limb joints. Ankle dorsiflexion/plantarflexion was most sensitive to toe marker placement error. The influence of toe marker placement error was generally larger for hip flexion/extension and rotation than for hip abduction/adduction and knee flexion/extension. The muscle forces responded to the residual variance in joint kinematics to various degrees, depending on the muscle's function for the specific joint motion. This study demonstrates the importance of evaluating marker error in joint kinematics and muscle forces when interpreting clinical gait analyses and planning treatment interventions.

  9. Accounting for measurement error in biomarker data and misclassification of subtypes in the analysis of tumor data.

    PubMed

    Nevo, Daniel; Zucker, David M; Tamimi, Rulla M; Wang, Molin

    2016-12-30

    A common paradigm in dealing with heterogeneity across tumors in cancer analysis is to cluster the tumors into subtypes using marker data on the tumor, and then to analyze each of the clusters separately. A more specific target is to investigate the association between risk factors and specific subtypes and to use the results for personalized preventive treatment. This task is usually carried out in two steps-clustering and risk factor assessment. However, two sources of measurement error arise in these problems. The first is the measurement error in the biomarker values. The second is the misclassification error when assigning observations to clusters. We consider the case with a specified set of relevant markers and propose a unified single-likelihood approach for normally distributed biomarkers. As an alternative, we consider a two-step procedure with the tumor type misclassification error taken into account in the second-step risk factor analysis. We describe our method for binary data and also for survival analysis data using a modified version of the Cox model. We present asymptotic theory for the proposed estimators. Simulation results indicate that our methods significantly lower the bias with a small price being paid in terms of variance. We present an analysis of breast cancer data from the Nurses' Health Study to demonstrate the utility of our method. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Impact of an antiretroviral stewardship strategy on medication error rates.

    PubMed

    Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E

    2018-05-02

    The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate (p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error (p < 0.001) and those with 2 or more errors (p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention (p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  11. Benchmarking observational uncertainties for hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.

    2013-12-01

    There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, in order to understand its information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and have shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example, due to financial or time constraints) to make a detailed study of uncertainty for every research project. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), the error characteristics measured (e.g. standard error, confidence bounds) and the error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale; flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where the relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when carrying out comparative/regionalisation and multi-objective analyses. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. 
Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.

  12. Error Analysis of Indonesian Junior High School Student in Solving Space and Shape Content PISA Problem Using Newman Procedure

    NASA Astrophysics Data System (ADS)

    Sumule, U.; Amin, S. M.; Fuad, Y.

    2018-01-01

    This study aims to determine the types and causes of errors made by junior high school students in completing PISA space and shape content, and the efforts that can be made to overcome them. Two subjects were selected based on the mathematical ability test results with the most errors, provided they were able to communicate orally and in writing. The two selected subjects then worked on the PISA ability test questions and were interviewed to find out the types and causes of their errors; they were then given scaffolding based on the type of mistake made. The results of this study show that the types of error the students made were comprehension and transformation errors. The causes were that students were not able to identify the keywords in the question, write down what is known or given, or specify formulas or devise a plan. To overcome these errors, students were given scaffolding. The scaffolding given to overcome comprehension errors was reviewing and restructuring, while the scaffolding given to overcome transformation errors was reviewing, restructuring, explaining, and developing representational tools. Teachers are advised to use scaffolding to resolve errors so that students are able to avoid them.

  13. Data Mining on Numeric Error in Computerized Physician Order Entry System Prescriptions.

    PubMed

    Wu, Xue; Wu, Changxu

    2017-01-01

    This study revealed the numeric error patterns related to dosage when doctors prescribed in a computerized physician order entry system. The error categories showed that the '6', '7', and '9' keys produced a higher incidence of errors in numeric keypad (Numpad) typing, while the '2', '3', and '0' keys produced a higher incidence of errors when typing on the main keyboard's digit row. Errors categorized as omission and substitution were more prevalent than transposition and intrusion errors.

  14. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
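
The attenuation mechanism at the heart of this study can be illustrated with a deliberately simplified sketch. The study itself used Poisson time-series models with correlated copollutant errors; the hypothetical example below uses a single linear exposure-response instead, just to show how classical, non-differential measurement error biases an estimated effect toward the null by the factor var(X)/(var(X)+var(U)):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
x_true = rng.normal(0.0, 1.0, n)             # "true" exposure, variance 1
x_obs = x_true + rng.normal(0.0, 1.0, n)     # classical measurement error, variance 1
y = 0.5 * x_true + rng.normal(0.0, 1.0, n)   # response with true slope 0.5

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

b_true = ols_slope(x_true, y)  # ~0.5: unbiased when exposure is error-free
b_obs = ols_slope(x_obs, y)    # ~0.25: attenuated by var(X)/(var(X)+var(U)) = 0.5
```

The same mechanism underlies the 10-85% RR attenuation reported above: the larger the error variance relative to the true exposure variance, the stronger the pull toward the null.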

  15. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  16. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  17. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  18. 42 CFR 431.960 - Types of payment errors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...

  19. Attitude errors arising from antenna/satellite altitude errors - Recognition and reduction

    NASA Technical Reports Server (NTRS)

    Godbey, T. W.; Lambert, R.; Milano, G.

    1972-01-01

    A review is presented of the three basic types of pulsed radar altimeter designs, as well as the source and form of altitude bias errors arising from antenna/satellite attitude errors in each design type. A quantitative comparison of the three systems was also made.

  20. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    NASA Astrophysics Data System (ADS)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    A slow learner, whose IQ is between 71 and 89, will have difficulties in solving mathematics problems that often lead to errors. These errors can be analyzed to determine where they occur and their types. This research is descriptive and qualitative, and aims to describe the locations, types, and causes of slow learners' errors in solving fraction problems in an inclusive junior high school class. The subject of this research is one slow-learner seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow-learner students. The data collection methods used in this study are written tasks and semi-structured interviews. The collected data were analyzed using Newman’s Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors: concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by slow learners.

  1. Offside Decisions by Expert Assistant Referees in Association Football: Perception and Recall of Spatial Positions in Complex Dynamic Events

    ERIC Educational Resources Information Center

    Gilis, Bart; Helsen, Werner; Catteeuw, Peter; Wagemans, Johan

    2008-01-01

    This study investigated the offside decision-making process in association football. The first aim was to capture the specific offside decision-making skills in complex dynamic events. Second, we analyzed the type of errors to investigate the factors leading to incorrect decisions. Federation Internationale de Football Association (FIFA; n = 29)…

  2. A comparison of FIA plot data derived from image pixels and image objects

    Treesearch

    Charles E. Werstak

    2012-01-01

    The use of Forest Inventory and Analysis (FIA) plot data for producing continuous and thematic maps of forest attributes (e.g., forest type, canopy cover, volume, and biomass) at the regional level from satellite imagery can be challenging due to differences in scale. Specifically, classification errors that may result from assumptions made between what the field data...

  3. Methods for Addressing Technology-induced Errors: The Current State.

    PubMed

    Borycki, E; Dexheimer, J W; Hullin Lucay Cossio, C; Gong, Y; Jensen, S; Kaipio, J; Kennebeck, S; Kirkendall, E; Kushniruk, A W; Kuziemsky, C; Marcilly, R; Röhrig, R; Saranto, K; Senathirajah, Y; Weber, J; Takeda, H

    2016-11-10

    The objectives of this paper are to review and discuss the methods that are being used internationally to report on, mitigate, and eliminate technology-induced errors. The IMIA Working Group for Health Informatics for Patient Safety worked together to review and synthesize some of the main methods and approaches associated with technology-induced error reporting, reduction, and mitigation. The work involved a review of the evidence-based literature as well as guideline publications specific to health informatics. The paper presents a rich overview of current approaches, issues, and methods associated with: (1) safe HIT design, (2) safe HIT implementation, (3) reporting on technology-induced errors, (4) technology-induced error analysis, and (5) health information technology (HIT) risk management. The work is based on research from around the world. Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although issues and challenges remain, these methodologies have been shown to improve the quality and safety of HIT. Since the first publications documenting technology-induced errors in healthcare in 2005, we have seen researchers develop, in a short 10 years, ways of identifying and addressing these types of errors. We have also seen organizations begin to use these approaches. Knowledge has been translated into practice in a short ten years, whereas the norm for other research areas is 20 years.

  4. Precision of a CAD/CAM-engineered surgical template based on a facebow for orthognathic surgery: an experiment with a rapid prototyping maxillary model.

    PubMed

    Lee, Jae-Won; Lim, Se-Ho; Kim, Moon-Key; Kang, Sang-Hoon

    2015-12-01

    We examined the precision of a computer-aided design/computer-aided manufacturing (CAD/CAM)-engineered, facebow-based surgical guide template (facebow wafer) by comparing it with a bite splint-type CAD/CAM-engineered surgical guide template (bite wafer). We used 24 rapid prototyping (RP) models of the craniofacial skeleton with maxillary deformities. Twelve RP models each were used for the facebow wafer group and the bite wafer group. Experimental maxillary orthognathic surgery was performed on the RP models of both groups. Errors were evaluated through comparisons with the surgical simulations. We measured the minimum distances from three planes of reference to determine the vertical, lateral, and anteroposterior errors at specific measurement points. The measured errors were compared between the two groups using a t test. There were significant intergroup differences in the lateral error when we compared the absolute values of the 3-D linear distance, as well as in the vertical, lateral, and anteroposterior errors. The bite wafer method exhibited little lateral error overall and little error in the anterior tooth region. The facebow wafer method exhibited very little vertical error in the posterior molar region. The clinical precision of the facebow wafer method did not significantly exceed that of the bite wafer method. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Offside decisions by expert assistant referees in association football: Perception and recall of spatial positions in complex dynamic events.

    PubMed

    Gilis, Bart; Helsen, Werner; Catteeuw, Peter; Wagemans, Johan

    2008-03-01

    This study investigated the offside decision-making process in association football. The first aim was to capture the specific offside decision-making skills in complex dynamic events. Second, we analyzed the type of errors to investigate the factors leading to incorrect decisions. Fédération Internationale de Football Association (FIFA; n = 29) and Belgian elite (n = 28) assistant referees (ARs) assessed 64 computer-based offside situations. First, an expertise effect was found. The FIFA ARs assessed the trials more accurately than the Belgian ARs (76.4% vs. 67.5%). Second, regarding the type of error, all ARs clearly tended to raise their flag in doubtful situations. This observation could be explained by a perceptual bias associated with the flash-lag effect. Specifically, attackers were perceived ahead of their actual positions, and this tendency was stronger for the Belgian than for the FIFA ARs (11.0 vs. 8.4 pixels), in particular when the difficulty of the trials increased. Further experimentation is needed to examine whether video- and computer-based decision-making training is effective in improving the decision-making skills of ARs during the game. PsycINFO Database Record (c) 2008 APA, all rights reserved

  6. How scientific experiments are designed: Problem solving in a knowledge-rich, error-rich environment

    NASA Astrophysics Data System (ADS)

    Baker, Lisa M.

    While theory formation and the relation between theory and data has been investigated in many studies of scientific reasoning, researchers have focused less attention on reasoning about experimental design, even though the experimental design process makes up a large part of real-world scientists' reasoning. The goal of this thesis was to provide a cognitive account of the scientific experimental design process by analyzing experimental design as problem-solving behavior (Newell & Simon, 1972). Three specific issues were addressed: the effect of potential error on experimental design strategies, the role of prior knowledge in experimental design, and the effect of characteristics of the space of alternate hypotheses on alternate hypothesis testing. A two-pronged in vivo/in vitro research methodology was employed, in which transcripts of real-world scientific laboratory meetings were analyzed as well as undergraduate science and non-science majors' design of biology experiments in the psychology laboratory. It was found that scientists use a specific strategy to deal with the possibility of error in experimental findings: they include "known" control conditions in their experimental designs both to determine whether error is occurring and to identify sources of error. The known controls strategy had not been reported in earlier studies with science-like tasks, in which participants' responses to error had consisted of replicating experiments and discounting results. With respect to prior knowledge: scientists and undergraduate students drew on several types of knowledge when designing experiments, including theoretical knowledge, domain-specific knowledge of experimental techniques, and domain-general knowledge of experimental design strategies. Finally, undergraduate science students generated and tested alternates to their favored hypotheses when the space of alternate hypotheses was constrained and searchable. 
This result may help explain findings of confirmation bias in earlier studies using science-like tasks, in which characteristics of the alternate hypothesis space may have made it unfeasible for participants to generate and test alternate hypotheses. In general, scientists and science undergraduates were found to engage in a systematic experimental design process that responded to salient features of the problem environment, including the constant potential for experimental error, availability of alternate hypotheses, and access to both theoretical knowledge and knowledge of experimental techniques.

  7. Didn't You Run the Spell Checker? Effects of Type of Spelling Error and Use of a Spell Checker on Perceptions of the Author

    ERIC Educational Resources Information Center

    Figueredo, Lauren; Varnhagen, Connie K.

    2005-01-01

    We investigated expectations regarding a writer's responsibility to proofread text for spelling errors when using a word processor. Undergraduate students read an essay and completed a questionnaire regarding their perceptions of the author and the quality of the essay. They then manipulated type of spelling error (no error, homophone error,…

  8. Trends in Health Information Technology Safety: From Technology-Induced Errors to Current Approaches for Ensuring Technology Safety

    PubMed Central

    2013-01-01

    Objectives Health information technology (HIT) research findings suggested that new healthcare technologies could reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, the models and frameworks used to understand these new types of errors, the monitoring of such errors, and the methods that can be used to prevent them. More research will be needed to better understand and mitigate these types of errors. PMID:23882411

  9. Economic Value of Improved Accuracy for Self-Monitoring of Blood Glucose Devices for Type 1 and Type 2 Diabetes in England.

    PubMed

    McQueen, Robert Brett; Breton, Marc D; Craig, Joyce; Holmes, Hayden; Whittington, Melanie D; Ott, Markus A; Campbell, Jonathan D

    2018-04-01

    The objective was to model clinical and economic outcomes of self-monitoring blood glucose (SMBG) devices with varying error ranges and strip prices for type 1 and insulin-treated type 2 diabetes patients in England. We programmed a simulation model that included separate risk and complication estimates by type of diabetes and evidence from in silico modeling validated by the Food and Drug Administration. Changes in SMBG error were associated with changes in hemoglobin A1c (HbA1c) and, separately, with changes in hypoglycemia. A Markov cohort simulation estimated clinical and economic outcomes. An SMBG device with 8.4% error and a strip price of £0.30 (exceeding the accuracy requirements of International Organization for Standardization [ISO] 15197:2013/EN ISO 15197:2015) was compared to a device with 15% error (meeting ISO 15197:2013/EN ISO 15197:2015 accuracy) and a price of £0.20. Outcomes were lifetime costs, quality-adjusted life years (QALYs), and incremental cost-effectiveness ratios (ICERs). With SMBG errors associated with changes in HbA1c only, the ICER was £3064 per QALY in type 1 diabetes and £264,668 per QALY in insulin-treated type 2 diabetes for an SMBG device with 8.4% versus 15% error. With SMBG errors associated with hypoglycemic events only, the device exceeding accuracy requirements was cost-saving and more effective in insulin-treated type 1 and type 2 diabetes. Investment in devices with higher strip prices but improved accuracy (less error) appears to be an efficient strategy for insulin-treated diabetes patients at high risk of severe hypoglycemia.
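    The study's headline numbers are incremental cost-effectiveness ratios. As a minimal sketch of how an ICER is computed (the figures below are illustrative, not the study's values):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical lifetime totals for a pricier but more accurate device
# versus a cheaper, less accurate one (illustrative numbers only).
print(icer(cost_new=52_000, qaly_new=10.6, cost_old=50_500, qaly_old=10.1))
```

    A device is "cost-saving and more effective" (dominant) when the cost difference is negative while the QALY difference is positive, in which case no ratio is reported at all.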

  10. Using raindrop size distributions from different types of disdrometer to establish weather radar algorithms

    NASA Astrophysics Data System (ADS)

    Baldini, Luca; Adirosi, Elisa; Roberto, Nicoletta; Vulpiani, Gianfranco; Russo, Fabio; Napolitano, Francesco

    2015-04-01

    Radar precipitation retrieval uses several relationships that parameterize precipitation properties (such as rainfall rate, liquid water content, and, for radars at attenuated frequencies such as C- and X-band, attenuation) as a function of combinations of radar measurements. The uncertainty in such relations strongly affects the uncertainty of precipitation and attenuation estimates. A commonly used method to derive such relationships is to apply regression methods to precipitation measurements and radar observables simulated from datasets of drop size distributions (DSD) using microphysical and electromagnetic assumptions. DSD datasets are derived either from theoretical considerations (i.e., based on the assumption that the radar always samples raindrops whose sizes follow a gamma distribution) or from experimental measurements collected over the years by disdrometers. In principle, using long-term disdrometer measurements provides parameterizations more representative of a specific climatology. However, instrumental errors specific to a disdrometer can affect the results. In this study, different weather radar algorithms resulting from DSDs collected by diverse types of disdrometers, namely the 2D video disdrometer, the first and second generations of the OTT Parsivel laser disdrometer, and the Thies Clima laser disdrometer, in the area of Rome (Italy) are presented and discussed to establish to what extent dual-polarization radar algorithms derived from experimental DSD datasets are influenced by the different error structures of the different types of disdrometers used to collect the data.

  11. Defining near misses: towards a sharpened definition based on empirical data about error handling processes.

    PubMed

    Kessels-Habraken, Marieke; Van der Schaaf, Tjerk; De Jonge, Jan; Rutte, Christel

    2010-05-01

    Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and correction. Reporting and analysis of so-called near misses - usually defined as incidents without adverse consequences for patients - are necessary to gather information about successful error recovery mechanisms. This study establishes the need for a clearer and more consistent definition of near misses to enable large-scale reporting and analysis in order to obtain such information. Qualitative incident reports and interviews were collected on four units of two Dutch general hospitals. Analysis of the 143 accompanying error handling processes demonstrated that different incident types each provide unique information about error handling. Specifically, error handling processes underlying incidents that did not reach the patient differed significantly from those of incidents that reached the patient, irrespective of harm, because of successful countermeasures that had been taken after error detection. We put forward two possible definitions of near misses and argue that, from a practical point of view, the optimal definition may be contingent on organisational context. Both proposed definitions could yield large-scale reporting of near misses. Subsequent analysis could enable health care organisations to improve the safety and quality of care proactively by (1) eliminating failure factors before real accidents occur, (2) enhancing their ability to intercept errors in time, and (3) improving their safety culture. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. Effects of categorization method, regression type, and variable distribution on the inflation of Type-I error rate when categorizing a confounding variable.

    PubMed

    Barnwell-Ménard, Jean-Louis; Li, Qing; Cohen, Alan A

    2015-03-15

    The loss of signal associated with categorizing a continuous variable is well known, and previous studies have demonstrated that this can lead to an inflation of Type-I error when the categorized variable is a confounder in a regression analysis estimating the effect of an exposure on an outcome. However, it is not known how the Type-I error may vary under different circumstances, including logistic versus linear regression, different distributions of the confounder, and different categorization methods. Here, we analytically quantified the effect of categorization and then performed a series of 9600 Monte Carlo simulations to estimate the Type-I error inflation associated with categorization of a confounder under different regression scenarios. We show that Type-I error is unacceptably high (>10% in most scenarios and often 100%). The only exception was when the variable categorized was a continuous mixture proxy for a genuinely dichotomous latent variable, where both the continuous proxy and the categorized variable are error-ridden proxies for the dichotomous latent variable. As expected, error inflation was also higher with larger sample size, fewer categories, and stronger associations between the confounder and the exposure or outcome. We provide online tools that can help researchers estimate the potential error inflation and understand how serious a problem this is. Copyright © 2014 John Wiley & Sons, Ltd.
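    The inflation the authors describe is easy to reproduce. A minimal Monte Carlo sketch (my own illustration, not the authors' 9600-simulation design): the exposure has no direct effect on the outcome, yet adjusting for a median split of the confounder instead of the confounder itself drives the rejection rate far above the nominal 5%.

```python
import numpy as np

rng = np.random.default_rng(0)

def type1_rate(n=500, n_sims=1000, categorize=True):
    """Fraction of simulations in which the exposure coefficient is
    'significant' at the ~5% level even though its true value is zero."""
    rejections = 0
    for _ in range(n_sims):
        c = rng.normal(size=n)             # continuous confounder
        x = c + rng.normal(size=n)         # exposure driven by the confounder
        y = c + rng.normal(size=n)         # outcome driven by the confounder only
        adj = (c > np.median(c)).astype(float) if categorize else c
        X = np.column_stack([np.ones(n), x, adj])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        rejections += abs(beta[1] / se) > 1.96
    return rejections / n_sims

print("adjusting for the continuous confounder:", type1_rate(categorize=False))
print("adjusting for its median split:        ", type1_rate(categorize=True))
```

    With a strong confounder, the median-split adjustment leaves enough residual confounding that the rejection rate approaches 100%, matching the paper's ">10% in most scenarios and often 100%".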

  13. Study of Synthetic Vision Systems (SVS) and Velocity-vector Based Command Augmentation System (V-CAS) on Pilot Performance

    NASA Technical Reports Server (NTRS)

    Liu, Dahai; Goodrich, Ken; Peak, Bob

    2006-01-01

    This study investigated the effects of synthetic vision system (SVS) concepts and advanced flight controls on single pilot performance (SPP). Specifically, we evaluated the benefits and interactions of two levels of terrain portrayal, guidance symbology, and control-system response type on SPP in the context of lower-landing minima (LLM) approaches. Performance measures consisted of flight technical error (FTE) and pilot perceived workload. In this study, pilot rating, control type, and guidance symbology were not found to significantly affect FTE or workload. It is likely that transfer from prior experience, limited scope of the evaluation task, specific implementation limitations, and limited sample size were major factors in obtaining these results.

  14. Neuropsychological analysis of a typewriting disturbance following cerebral damage.

    PubMed

    Boyle, M; Canter, G J

    1987-01-01

    Following a left CVA, a skilled professional typist sustained a disturbance of typing disproportionate to her handwriting disturbance. Typing errors were predominantly of the sequencing type, with spatial errors much less frequent, suggesting that the impairment was based on a relatively early (premotor) stage of processing. Depriving the subject of visual feedback during handwriting greatly increased her error rate. Similarly, interfering with auditory feedback during speech substantially reduced her self-correction of speech errors. These findings suggested that impaired ability to utilize somesthetic information--probably caused by the subject's parietal lobe lesion--may have been the basis of the typing disorder.

  15. The decline and fall of Type II error rates

    Treesearch

    Steve Verrill; Mark Durst

    2005-01-01

    For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
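    The exponential decline is visible in a direct computation. A sketch for the simplest case, a one-sided z-test (an illustration; the paper treats general linear models):

```python
from statistics import NormalDist

def type2_error(n, delta=0.5, alpha=0.05):
    """Type II error (beta) of a one-sided z-test of H0: mu = 0 when the
    true mean is `delta` standard deviations, with sample size n."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(z_alpha - delta * n ** 0.5)

for n in (10, 20, 40, 80):
    print(f"n={n:3d}  beta={type2_error(n):.2e}")
```

    Asymptotically beta behaves like exp(-n * delta**2 / 2), so each doubling of n roughly squares the Type II error, which is the rapid decline the authors reemphasize.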

  16. Restrictions on surgical resident shift length does not impact type of medical errors.

    PubMed

    Anderson, Jamie E; Goodman, Laura F; Jensen, Guy W; Salcedo, Edgardo S; Galante, Joseph M

    2017-05-15

    In 2011, resident duty hours were restricted in an attempt to improve patient safety and resident education. Although shorter shifts aim to reduce fatigue, they lead to more patient handoffs, raising concerns about adverse effects on patient safety. This study seeks to determine whether differences in duty-hour restrictions influence the types of errors made by residents. This is a nested retrospective cohort study at a surgery department in an academic medical center. During 2013-14, standard 2011 duty hours were in place for residents. In 2014-15, duty-hour restrictions at the study site were relaxed ("flexible") with no restrictions on shift length. We reviewed all morbidity and mortality submissions from July 1, 2013-June 30, 2015 and compared differences in types of errors between these periods. A total of 383 patients experienced adverse events, including 59 deaths (15.4%). Comparing standard versus flexible periods, there was no difference in mortality (15.7% versus 12.6%, P = 0.479) or complication rates (2.6% versus 2.5%, P = 0.696). There was no difference in types of errors between periods (P = 0.050-0.808). The greatest number of errors was due to cognitive failures (229, 59.6%) and the fewest to team failure (127, 33.2%). By subset, technical errors resulted in the highest number of errors (169, 44.1%). There were no differences in types of errors for cases that were nonelective, at night, or involving residents. Among adverse events reported in this departmental surgical morbidity and mortality, there were no differences in types of errors when resident duty hours were less restrictive. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Several Modified Goodness-Of-Fit Tests for the Cauchy Distribution with Unknown Scale and Location Parameters

    DTIC Science & Technology

    1994-03-01

    labels of α, which are called significance levels. The hypothesis tests are done based on the α levels. The maximum probabilities of making type II error...critical values at specific α levels. This procedure is done for each of the 50,000 samples. The number of the samples passing each test at those specific... α levels is counted. The ratio of the number of accepted samples to 50,000 gives the percentage point. Then, subtracting that value from one would

  18. Fully Convolutional Networks for Ground Classification from LIDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    Rizaldy, A.; Persello, C.; Gevaert, C. M.; Oude Elberink, S. J.

    2018-05-01

    Deep learning has been used extensively for image classification in recent years. The use of deep learning for ground classification from LIDAR point clouds has also been studied recently. However, point clouds need to be converted into an image in order to use Convolutional Neural Networks (CNNs). In state-of-the-art techniques, this conversion is slow because each point is converted into a separate image, which leads to highly redundant computation during conversion and classification. The goal of this study is to design a more efficient data conversion and ground classification. This goal is achieved by first converting the whole point cloud into a single image. The classification is then performed by a Fully Convolutional Network (FCN), a modified version of a CNN designed for pixel-wise image classification. The proposed method is significantly faster than state-of-the-art techniques: on the ISPRS Filter Test dataset, it is 78 times faster for conversion and 16 times faster for classification. Our experimental analysis on the same dataset shows that the proposed method results in 5.22% total error, 4.10% type I error, and 15.07% type II error. Compared to the previous CNN-based technique and LAStools software, the proposed method reduces the total error and type I error (while type II error is slightly higher). The method was also tested on a very high point density LIDAR point cloud, resulting in 4.02% total error, 2.15% type I error and 6.14% type II error.
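    The total/type I/type II split used above is straightforward to compute from labelled points. A sketch under the assumed ISPRS filter-test convention (the function and label encoding are mine, not the authors' code):

```python
import numpy as np

def filter_error_rates(truth, pred):
    """Error rates for ground filtering (label 1 = ground, 0 = non-ground).
    Type I:  ground points wrongly rejected as non-ground.
    Type II: non-ground points wrongly accepted as ground."""
    truth, pred = np.asarray(truth), np.asarray(pred)
    type1 = np.mean(pred[truth == 1] == 0)
    type2 = np.mean(pred[truth == 0] == 1)
    total = np.mean(pred != truth)
    return total, type1, type2

# Toy example: 4 ground and 2 non-ground points, 2 misclassifications.
total, t1, t2 = filter_error_rates([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 0, 1])
print(total, t1, t2)
```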

  19. An automated workflow for patient-specific quality control of contour propagation

    NASA Astrophysics Data System (ADS)

    Beasley, William J.; McWilliam, Alan; Slevin, Nicholas J.; Mackay, Ranald I.; van Herk, Marcel

    2016-12-01

    Contour propagation is an essential component of adaptive radiotherapy, but current contour propagation algorithms are not yet sufficiently accurate to be used without manual supervision. Manual review of propagated contours is time-consuming, making routine implementation of real-time adaptive radiotherapy unrealistic. Automated methods of monitoring the performance of contour propagation algorithms are therefore required. We have developed an automated workflow for patient-specific quality control of contour propagation and validated it on a cohort of head and neck patients, on which parotids were outlined by two observers. Two types of error were simulated: mislabelling of contours and the introduction of noise in the scans before propagation. The ability of the workflow to correctly predict the occurrence of errors was tested, taking both sets of observer contours as ground truth, using receiver operating characteristic analysis. The area under the curve was 0.90 and 0.85 for the two observers, indicating good ability to predict the occurrence of errors. This tool could potentially be used to identify propagated contours that are likely to be incorrect, acting as a flag for manual review of these contours. This would make contour propagation more efficient, facilitating the routine implementation of adaptive radiotherapy.
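    The area under the ROC curve reported here can be computed without tracing the curve, via the Mann-Whitney rank-sum identity. A sketch (illustrative, not the study's code; label 1 marks contours that actually contained an error):

```python
import numpy as np
from scipy.stats import rankdata

def roc_auc(scores, labels):
    """AUC = probability that a randomly chosen positive case (true error)
    receives a higher score than a randomly chosen negative case."""
    scores = np.asarray(scores, dtype=float)
    pos = np.asarray(labels) == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    ranks = rankdata(scores)  # average ranks handle tied scores
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(roc_auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # perfect separation -> 1.0
```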

  20. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    ERIC Educational Resources Information Center

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programe in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  1. Identifying Novice Student Programming Misconceptions and Errors from Summative Assessments

    ERIC Educational Resources Information Center

    Veerasamy, Ashok Kumar; D'Souza, Daryl; Laakso, Mikko-Jussi

    2016-01-01

    This article presents a study aimed at examining the novice student answers in an introductory programming final e-exam to identify misconceptions and types of errors. Our study used the Delphi concept inventory to identify student misconceptions and skill, rule, and knowledge-based errors approach to identify the types of errors made by novices…

  2. False Positives in Multiple Regression: Unanticipated Consequences of Measurement Error in the Predictor Variables

    ERIC Educational Resources Information Center

    Shear, Benjamin R.; Zumbo, Bruno D.

    2013-01-01

    Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…

  3. Understanding Problem-Solving Errors by Students with Learning Disabilities in Standards-Based and Traditional Curricula

    ERIC Educational Resources Information Center

    Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley

    2016-01-01

    Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…

  4. Adaptive fuzzy logic controller with direct action type structures for InnoSAT attitude control system

    NASA Astrophysics Data System (ADS)

    Bakri, F. A.; Mashor, M. Y.; Sharun, S. M.; Bibi Sarpinah, S. N.; Abu Bakar, Z.

    2016-10-01

    This study proposes an adaptive fuzzy controller for the attitude control system (ACS) of the Innovative Satellite (InnoSAT) based on a direct action type structure. In order to study new methods used in satellite attitude control, this paper presents three controller structures: Fuzzy PI, Fuzzy PD and conventional Fuzzy PID. The objective of this work is to compare the time response and tracking performance among the three different structures of controllers. The controller parameters were tuned on-line by an adjustment mechanism, an approach similar to minimizing a PID error between the actual and model reference outputs. This paper also presents Model Reference Adaptive Control (MRAC) as a control scheme for time-varying systems in which the performance specifications are given in terms of the reference model. All the controllers were tested using the InnoSAT system under operating conditions such as disturbance, varying gain, measurement noise and time delay. In conclusion, among all the considered direct action (DA)-type structures, the AFPID controller was observed to be the best, since it outperformed the other controllers in most conditions.

  5. An Explicit Linear Filtering Solution for the Optimization of Guidance Systems with Statistical Inputs

    NASA Technical Reports Server (NTRS)

    Stewart, Elwood C.

    1961-01-01

    The determination of optimum filtering characteristics for guidance system design is generally a tedious process which cannot usually be carried out in general terms. In this report a simple explicit solution is given which is applicable to many different types of problems. It is shown to be applicable to problems which involve optimization of constant-coefficient guidance systems and time-varying homing type systems for several stationary and nonstationary inputs. The solution is also applicable to off-design performance, that is, the evaluation of system performance for inputs for which the system was not specifically optimized. The solution is given in generalized form in terms of the minimum theoretical error, the optimum transfer functions, and the optimum transient response. The effects of input signal, contaminating noise, and limitations on the response are included. From the results given, it is possible in an interception problem, for example, to rapidly assess the effects on minimum theoretical error of such factors as target noise and missile acceleration. It is also possible to answer important questions regarding the effect of type of target maneuver on optimum performance.

  6. Do calculation errors by nurses cause medication errors in clinical practice? A literature review.

    PubMed

    Wright, Kerri

    2010-01-01

    This review aims to examine the literature available to ascertain whether medication errors in clinical practice are the result of nurses' miscalculating drug dosages. Research studies highlighting the poor calculation skills of nurses and student nurses have used written drug calculation tests in formal classroom settings [Kapborg, I., 1994. Calculation and administration of drug dosage by Swedish nurses, student nurses and physicians. International Journal for Quality in Health Care 6(4): 389-395; Hutton, M., 1998. Nursing Mathematics: the importance of application. Nursing Standard 13(11): 35-38; Weeks, K., Lynne, P., Torrance, C., 2000. Written drug dosage errors made by students: the threat to clinical effectiveness and the need for a new approach. Clinical Effectiveness in Nursing 4, 20-29; Wright, K., 2004. Investigation to find strategies to improve student nurses' maths skills. British Journal of Nursing 13(21): 1280-1287; Wright, K., 2005. An exploration into the most effective way to teach drug calculation skills to nursing students. Nurse Education Today 25, 430-436], but there have been no reviews of the literature on medication errors in practice that specifically examine whether the errors are caused by nurses' poor calculation skills. The databases Medline, CINAHL, British Nursing Index (BNI), Journal of the American Medical Association (JAMA) and Archives, and Cochrane reviews were searched for research studies or systematic reviews which reported on the incidence or causes of drug errors in clinical practice. In total, 33 articles met the criteria for this review. There were no studies that examined nurses' drug calculation errors in practice. As a result, studies and systematic reviews that investigated the types and causes of drug errors were examined to establish whether miscalculations by nurses were the causes of errors.
The review found insufficient evidence to suggest that medication errors are caused by nurses' poor calculation skills. Of the 33 studies reviewed, only five articles specifically recorded information relating to calculation errors, and only two of these detected errors using a direct observational approach. The literature suggests that there are other, more pressing aspects of nurses' preparation and administration of medications which contribute to medication errors in practice and require more urgent attention, calling into question the current focus on the calculation and numeracy skills of pre-registration and qualified nurses (NMC 2008). However, more research is required into calculation errors in practice. In particular, there is a need for a direct observational study of paediatric nurses, as there are presently none examining this area of practice.

  7. Efficacy of monitoring and empirical predictive modeling at improving public health protection at Chicago beaches

    USGS Publications Warehouse

    Nevers, Meredith B.; Whitman, Richard L.

    2011-01-01

    Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine if emerging monitoring approaches could effectively reduce risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data and compared management outcomes using different standards for decision-making. Predictability (R2) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.
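    The two management errors defined above have a simple operational form. A sketch (the function name and threshold handling are mine; 235 CFU/100 ml is the default standard the study mentions):

```python
def management_error(predicted_cfu, actual_cfu, standard=235):
    """Classify a closure decision made from a predicted E. coli
    concentration (CFU/100 ml) against the true concentration.
    Type I:  beach closed although the water met the standard.
    Type II: beach left open although the water exceeded it."""
    closed = predicted_cfu > standard
    exceeded = actual_cfu > standard
    if closed and not exceeded:
        return "type I"
    if not closed and exceeded:
        return "type II"
    return "correct"

print(management_error(300, 100))  # overcautious closure -> "type I"
print(management_error(100, 300))  # missed exceedance    -> "type II"
```

    Lowering the standard trades type II errors (health risk) for type I errors (lost beach days), which is why the choice of single-sample standard matters as much as the model.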

  8. Multiple hypotheses testing based on ordered p values--a historical survey with applications to medical research.

    PubMed

    Hommel, Gerhard; Bretz, Frank; Maurer, Willi

    2011-07-01

    Global tests and multiple test procedures are often based on ordered p values. Such procedures are available for arbitrary dependence structures as well as for specific dependence assumptions of the test statistics. Most of these procedures have been considered as global tests. Multiple test procedures can be obtained by applying the closure principle in order to control the familywise error rate, or by using the false discovery rate as a criterion for type I error rate control. We provide an overview and present examples showing the importance of these procedures in medical research. Finally, we discuss modifications when different weights for the hypotheses of interest are chosen.
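Two standard procedures based on ordered p values, of the kind this survey covers, can be sketched briefly: the Holm step-down procedure (a closed test controlling the familywise error rate) and the Benjamini-Hochberg step-up procedure (controlling the false discovery rate). These are given only as illustrative examples of the class of methods, not as the specific procedures discussed in the article.

```python
def holm(pvals, alpha=0.05):
    """Holm step-down procedure: controls the familywise error rate.

    Rejects hypotheses in ascending p-value order while
    p_(i) <= alpha / (m - i + 1), stopping at the first failure.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):           # rank = 0, 1, ..., m-1
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break
    return reject

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: controls the false discovery rate.

    Finds the largest k with p_(k) <= (k / m) * alpha and rejects
    the k smallest p values.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k
    return reject
```

Note the characteristic difference: Holm stops at the first non-significant ordered p value, while Benjamini-Hochberg scans for the largest significant one, so BH typically rejects more hypotheses at the cost of a weaker error criterion.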

  9. Physical security and cyber security issues and human error prevention for 3D printed objects: detecting the use of an incorrect printing material

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2017-06-01

    A wide variety of characteristics of 3D printed objects have been linked to impaired structural integrity and use-efficacy. The printing material can also have a significant impact on the quality, utility and safety characteristics of a 3D printed object. Material issues can be created by vendor issues, physical security issues and human error. This paper presents and evaluates a system that can be used to detect incorrect material use in a 3D printer, using visible light imaging. Specifically, it assesses the ability to ascertain the difference between materials of different color and different types of material with similar coloration.

  10. Disclosure of Medical Errors: What Factors Influence How Patients Respond?

    PubMed Central

    Mazor, Kathleen M; Reed, George W; Yood, Robert A; Fischer, Melissa A; Baril, Joann; Gurwitz, Jerry H

    2006-01-01

    BACKGROUND Disclosure of medical errors is encouraged, but research on how patients respond to specific practices is limited. OBJECTIVE This study sought to determine whether full disclosure, an existing positive physician-patient relationship, an offer to waive associated costs, and the severity of the clinical outcome influenced patients' responses to medical errors. PARTICIPANTS Four hundred and seven health plan members participated in a randomized experiment in which they viewed video depictions of medical error and disclosure. DESIGN Subjects were randomly assigned to experimental condition. Conditions varied in type of medication error, level of disclosure, reference to a prior positive physician-patient relationship, an offer to waive costs, and clinical outcome. MEASURES Self-reported likelihood of changing physicians and of seeking legal advice; satisfaction, trust, and emotional response. RESULTS Nondisclosure increased the likelihood of changing physicians, and reduced satisfaction and trust in both error conditions. Nondisclosure increased the likelihood of seeking legal advice and was associated with a more negative emotional response in the missed allergy error condition, but did not have a statistically significant impact on seeking legal advice or emotional response in the monitoring error condition. Neither the existence of a positive relationship nor an offer to waive costs had a statistically significant impact. CONCLUSIONS This study provides evidence that full disclosure is likely to have a positive effect or no effect on how patients respond to medical errors. The clinical outcome also influences patients' responses. The impact of an existing positive physician-patient relationship, or of waiving costs associated with the error remains uncertain. PMID:16808770

  11. Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method

    NASA Astrophysics Data System (ADS)

    Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu

    2017-10-01

Due to its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research. However, the error caused by the lever structure has a great influence on the profile measurement, so this paper analyzes the error of a high-precision, large-range lever-type stylus profilometer. The errors are corrected by the Nelder-Mead Simplex method, and the results are verified by spherical surface calibration. The results show that this method can effectively reduce the measurement error and improve the accuracy of the stylus profilometer in large-scale measurement.
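The abstract does not give the paper's lever error model, but the optimization step can be illustrated with a minimal pure-Python Nelder-Mead Simplex routine. The quadratic objective in the usage example is a stand-in for the profilometer's residual function; everything here is a sketch, not the authors' implementation.

```python
def nelder_mead(f, x0, step=0.1, tol=1e-8, max_iter=500):
    """Minimize f over a list of parameters using a simplified Nelder-Mead
    Simplex method (reflection, expansion, inside contraction, shrink)."""
    n = len(x0)
    # Build the initial simplex: x0 plus one perturbed point per dimension.
    simplex = [list(x0)]
    for i in range(n):
        p = list(x0)
        p[i] += step
        simplex.append(p)
    alpha, gamma, rho, sigma = 1.0, 2.0, 0.5, 0.5
    for _ in range(max_iter):
        simplex.sort(key=f)
        if abs(f(simplex[-1]) - f(simplex[0])) < tol:
            break
        worst = simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        # Reflect the worst point through the centroid.
        xr = [centroid[i] + alpha * (centroid[i] - worst[i]) for i in range(n)]
        if f(simplex[0]) <= f(xr) < f(simplex[-2]):
            simplex[-1] = xr
        elif f(xr) < f(simplex[0]):
            # Expand further along the promising direction.
            xe = [centroid[i] + gamma * (centroid[i] - worst[i]) for i in range(n)]
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:
            # Contract toward the centroid; shrink the simplex if that fails.
            xc = [centroid[i] + rho * (worst[i] - centroid[i]) for i in range(n)]
            if f(xc) < f(worst):
                simplex[-1] = xc
            else:
                best = simplex[0]
                simplex = [best] + [
                    [best[i] + sigma * (p[i] - best[i]) for i in range(n)]
                    for p in simplex[1:]
                ]
    simplex.sort(key=f)
    return simplex[0]

# Usage: recover the minimum of a stand-in residual function.
best = nelder_mead(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, [0.0, 0.0])
```

Because the method is derivative-free, it suits calibration problems like this one, where the residual is computed from measured profiles rather than a closed-form model.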

  12. The nature of articulation errors in Egyptian Arabic-speaking children with velopharyngeal insufficiency due to cleft palate.

    PubMed

    Abou-Elsaad, Tamer; Baz, Hemmat; Afsah, Omayma; Mansy, Alzahraa

    2015-09-01

Even with early surgical repair, the majority of children with cleft palate demonstrate articulation errors and have typical cleft palate speech. The aim was to determine the nature of articulation errors of Arabic consonants in Egyptian Arabic-speaking children with velopharyngeal insufficiency (VPI). Thirty Egyptian Arabic-speaking children with VPI due to cleft palate (whether primary repaired or secondary repaired) were studied. Auditory perceptual assessment (APA) of the children's speech was conducted. Nasopharyngoscopy was done to assess velopharyngeal port (VPP) movements while the child was repeating speech tasks. The Mansoura Arabic Articulation Test (MAAT) was performed to analyze the consonant articulation of these children. The most frequent type of articulatory error observed was substitution, more specifically backing. Pharyngealization of anterior fricatives was the most frequent substitution, especially for the /s/ sound. The most frequent substituting sounds were /ʔ/, followed by /k/ and /n/. Significant correlations were found between the degrees of open nasality and VPP closure and the articulation errors. On the other hand, the sounds /ʔ/, /ħ/, /ʕ/, /n/, /w/, and /j/ were normally articulated in the entire study group. The determination of articulation errors in children with VPI could guide therapists in designing appropriate speech therapy programs for these cases. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Refractive errors.

    PubMed

    Schiefer, Ulrich; Kraus, Christina; Baumbach, Peter; Ungewiß, Judith; Michels, Ralf

    2016-10-14

All over the world, refractive errors are among the most frequently occurring treatable disturbances of visual function. Ametropias have a prevalence of nearly 70% among adults in Germany and are thus of great epidemiologic and socio-economic relevance. In the light of their own clinical experience, the authors review pertinent articles retrieved by a selective literature search employing the terms "ametropia," "anisometropia," "refraction," "visual acuity," and "epidemiology." In 2011, only 31% of persons over age 16 in Germany did not use any kind of visual aid; 63.4% wore eyeglasses and 5.3% wore contact lenses. Refractive errors were the most common reason for consulting an ophthalmologist, accounting for 21.1% of all outpatient visits. A pinhole aperture (stenopeic slit) is a suitable instrument for the basic diagnostic evaluation of impaired visual function due to optical factors. Spherical refractive errors (myopia and hyperopia), cylindrical refractive errors (astigmatism), unequal refractive errors in the two eyes (anisometropia), and the typical optical disturbance of old age (presbyopia) cause specific functional limitations and can be detected by a physician who does not need to be an ophthalmologist. Simple functional tests can be used in everyday clinical practice to determine quickly, easily, and safely whether the patient is suffering from a benign and easily correctable type of visual impairment, or whether there are other, more serious underlying causes.

  14. Addressing Common Student Technical Errors in Field Data Collection: An Analysis of a Citizen-Science Monitoring Project.

    PubMed

    Philippoff, Joanna; Baumgartner, Erin

    2016-03-01

    The scientific value of citizen-science programs is limited when the data gathered are inconsistent, erroneous, or otherwise unusable. Long-term monitoring studies, such as Our Project In Hawai'i's Intertidal (OPIHI), have clear and consistent procedures and are thus a good model for evaluating the quality of participant data. The purpose of this study was to examine the kinds of errors made by student researchers during OPIHI data collection and factors that increase or decrease the likelihood of these errors. Twenty-four different types of errors were grouped into four broad error categories: missing data, sloppiness, methodological errors, and misidentification errors. "Sloppiness" was the most prevalent error type. Error rates decreased with field trip experience and student age. We suggest strategies to reduce data collection errors applicable to many types of citizen-science projects including emphasizing neat data collection, explicitly addressing and discussing the problems of falsifying data, emphasizing the importance of using standard scientific vocabulary, and giving participants multiple opportunities to practice to build their data collection techniques and skills.

  15. The Effects of Non-Normality on Type III Error for Comparing Independent Means

    ERIC Educational Resources Information Center

    Mendes, Mehmet

    2007-01-01

The major objective of this study was to investigate the effects of non-normality on Type III error rates for the ANOVA F and its three commonly recommended parametric counterparts, namely the Welch, Brown-Forsythe, and Alexander-Govern tests. These tests were therefore compared in terms of Type III error rates across a variety of population distributions,…

  16. Item Discrimination and Type I Error in the Detection of Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Yanju; Brooks, Gordon P.; Johanson, George A.

    2012-01-01

    In 2009, DeMars stated that when impact exists there will be Type I error inflation, especially with larger sample sizes and larger discrimination parameters for items. One purpose of this study is to present the patterns of Type I error rates using Mantel-Haenszel (MH) and logistic regression (LR) procedures when the mean ability between the…

  17. Improving the prediction of going concern of Taiwanese listed companies using a hybrid of LASSO with data mining techniques.

    PubMed

    Goo, Yeung-Ja James; Chi, Der-Jang; Shen, Zong-De

    2016-01-01

    The purpose of this study is to establish rigorous and reliable going concern doubt (GCD) prediction models. This study first uses the least absolute shrinkage and selection operator (LASSO) to select variables and then applies data mining techniques to establish prediction models, such as neural network (NN), classification and regression tree (CART), and support vector machine (SVM). The samples of this study include 48 GCD listed companies and 124 NGCD (non-GCD) listed companies from 2002 to 2013 in the TEJ database. We conduct fivefold cross validation in order to identify the prediction accuracy. According to the empirical results, the prediction accuracy of the LASSO-NN model is 88.96 % (Type I error rate is 12.22 %; Type II error rate is 7.50 %), the prediction accuracy of the LASSO-CART model is 88.75 % (Type I error rate is 13.61 %; Type II error rate is 14.17 %), and the prediction accuracy of the LASSO-SVM model is 89.79 % (Type I error rate is 10.00 %; Type II error rate is 15.83 %).
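The reported accuracies and error rates follow directly from a confusion matrix. The sketch below assumes Type I error means a GCD company predicted as a going concern and Type II the reverse; the abstract does not state the paper's exact convention, so the labeling here is an assumption.

```python
def evaluate(y_true, y_pred, positive="GCD"):
    """Return (accuracy, type_I_rate, type_II_rate) for a binary classifier.

    Assumed convention (hypothetical, not confirmed by the abstract):
      Type I  = actual GCD firm predicted as non-GCD (missed distress)
      Type II = actual non-GCD firm flagged as GCD (false alarm)
    """
    tp = fn = tn = fp = 0
    for t, p in zip(y_true, y_pred):
        if t == positive:
            if p == positive:
                tp += 1
            else:
                fn += 1
        else:
            if p == positive:
                fp += 1
            else:
                tn += 1
    accuracy = (tp + tn) / len(y_true)
    type_i = fn / (tp + fn)    # error rate on actual GCD firms
    type_ii = fp / (tn + fp)   # error rate on actual non-GCD firms
    return accuracy, type_i, type_ii
```

With the 48/124 class imbalance in this sample, overall accuracy alone can mask a high Type I rate, which is why the study reports both error rates alongside accuracy.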

  18. Updating of aversive memories after temporal error detection is differentially modulated by mTOR across development

    PubMed Central

    Tallot, Lucille; Diaz-Mataix, Lorenzo; Perry, Rosemarie E.; Wood, Kira; LeDoux, Joseph E.; Mouly, Anne-Marie; Sullivan, Regina M.; Doyère, Valérie

    2017-01-01

    The updating of a memory is triggered whenever it is reactivated and a mismatch from what is expected (i.e., prediction error) is detected, a process that can be unraveled through the memory's sensitivity to protein synthesis inhibitors (i.e., reconsolidation). As noted in previous studies, in Pavlovian threat/aversive conditioning in adult rats, prediction error detection and its associated protein synthesis-dependent reconsolidation can be triggered by reactivating the memory with the conditioned stimulus (CS), but without the unconditioned stimulus (US), or by presenting a CS–US pairing with a different CS–US interval than during the initial learning. Whether similar mechanisms underlie memory updating in the young is not known. Using similar paradigms with rapamycin (an mTORC1 inhibitor), we show that preweaning rats (PN18–20) do form a long-term memory of the CS–US interval, and detect a 10-sec versus 30-sec temporal prediction error. However, the resulting updating/reconsolidation processes become adult-like after adolescence (PN30–40). Our results thus show that while temporal prediction error detection exists in preweaning rats, specific infant-type mechanisms are at play for associative learning and memory. PMID:28202715

  19. Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.

    PubMed

    Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D

    2018-04-07

We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. That requires both accurate delineations of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in assumed transducer separation produced the maximum SOS error from miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Method for Expressing Clinical and Statistical Significance of Ocular and Corneal Wavefront Error Aberrations

    PubMed Central

    Smolek, Michael K.

    2011-01-01

    Purpose The significance of ocular or corneal aberrations may be subject to misinterpretation whenever eyes with different pupil sizes or the application of different Zernike expansion orders are compared. A method is shown that uses simple mathematical interpolation techniques based on normal data to rapidly determine the clinical significance of aberrations, without concern for pupil and expansion order. Methods Corneal topography (Tomey, Inc.; Nagoya, Japan) from 30 normal corneas was collected and the corneal wavefront error analyzed by Zernike polynomial decomposition into specific aberration types for pupil diameters of 3, 5, 7, and 10 mm and Zernike expansion orders of 6, 8, 10 and 12. Using this 4×4 matrix of pupil sizes and fitting orders, best-fitting 3-dimensional functions were determined for the mean and standard deviation of the RMS error for specific aberrations. The functions were encoded into software to determine the significance of data acquired from non-normal cases. Results The best-fitting functions for 6 types of aberrations were determined: defocus, astigmatism, prism, coma, spherical aberration, and all higher-order aberrations. A clinical screening method of color-coding the significance of aberrations in normal, postoperative LASIK, and keratoconus cases having different pupil sizes and different expansion orders is demonstrated. Conclusions A method to calibrate wavefront aberrometry devices by using a standard sample of normal cases was devised. This method could be potentially useful in clinical studies involving patients with uncontrolled pupil sizes or in studies that compare data from aberrometers that use different Zernike fitting-order algorithms. PMID:22157570

  1. Bar Code Medication Administration Technology: Characterization of High-Alert Medication Triggers and Clinician Workarounds.

    PubMed

    Miller, Daniel F; Fortier, Christopher R; Garrison, Kelli L

    2011-02-01

Bar code medication administration (BCMA) technology is gaining acceptance for its ability to prevent medication administration errors. However, studies suggest that improper use of BCMA technology can yield unsatisfactory error prevention and introduction of new potential medication errors. The objective was to evaluate the incidence of high-alert medication BCMA triggers and alert types and discuss the type of nursing and pharmacy workarounds occurring with the use of BCMA technology and the electronic medication administration record (eMAR). Medication scanning and override reports from January 1, 2008, through November 30, 2008, for all adult medical/surgical units were retrospectively evaluated for high-alert medication system triggers, alert types, and override reason documentation. An observational study of nursing workarounds on an adult medicine step-down unit was performed, and an analysis of potential pharmacy workarounds affecting BCMA and the eMAR was also conducted. Seventeen percent of scanned medications triggered an error alert, of which 55% were for high-alert medications. Insulin aspart, NPH insulin, hydromorphone, potassium chloride, and morphine were the top 5 high-alert medications that generated alert messages. Clinician override reasons for alerts were documented in only 23% of administrations. Observational studies assessing for nursing workarounds revealed a median of 3 clinician workarounds per administration. Specific nursing workarounds included a failure to scan medications/patient armband and scanning the bar code once the dosage has been removed from the unit-dose packaging. Analysis of pharmacy order entry process workarounds revealed the potential for missed doses, duplicate doses, and doses being scheduled at the wrong time. BCMA has the potential to prevent high-alert medication errors by alerting clinicians through alert messages. Nursing and pharmacy workarounds can limit the recognition of optimal safety outcomes, and therefore workflow processes must be continually analyzed and restructured to yield the intended full benefits of BCMA technology. © 2011 SAGE Publications.

  2. Refractive errors in Aminu Kano Teaching Hospital, Kano Nigeria.

    PubMed

    Lawan, Abdu; Eme, Okpo

    2011-12-01

The aim of the study was to retrospectively determine the pattern of refractive errors seen in the eye clinic of Aminu Kano Teaching Hospital, Kano, Nigeria from January to December 2008. The clinic refraction register was used to retrieve the case folders of all patients refracted during the review period. Information extracted included patients' age, sex, and type of refractive error. All patients had a basic eye examination (to rule out other causes of subnormal vision) including intraocular pressure measurement and streak retinoscopy at a two-thirds-meter working distance. The final subjective refraction correction given to the patients was used to categorise the type of refractive error. Refractive errors were observed in 1584 patients and accounted for 26.9% of clinic attendance. There were more females than males (M:F = 1.0:1.2). The common types of refractive error were presbyopia in 644 patients (40%), various types of astigmatism in 527 patients (33%), myopia in 216 patients (14%), hypermetropia in 171 patients (11%), and aphakia in 26 patients (2%). Refractive errors are a common cause of presentation in the eye clinic. Identification and correction of refractive errors should be an integral part of eye care delivery.

  3. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  4. Methods for Addressing Technology-Induced Errors: The Current State

    PubMed Central

    Dexheimer, J. W.; Hullin Lucay Cossio, C.; Gong, Y.; Jensen, S.; Kaipio, J.; Kennebeck, S.; Kirkendall, E.; Kushniruk, A. W.; Kuziemsky, C.; Marcilly, R.; Röhrig, R.; Saranto, K.; Senathirajah, Y.; Weber, J.; Takeda, H.

    2016-01-01

    Summary Objectives The objectives of this paper are to review and discuss the methods that are being used internationally to report on, mitigate, and eliminate technology-induced errors. Methods The IMIA Working Group for Health Informatics for Patient Safety worked together to review and synthesize some of the main methods and approaches associated with technology-induced error reporting, reduction, and mitigation. The work involved a review of the evidence-based literature as well as guideline publications specific to health informatics. Results The paper presents a rich overview of current approaches, issues, and methods associated with: (1) safe HIT design, (2) safe HIT implementation, (3) reporting on technology-induced errors, (4) technology-induced error analysis, and (5) health information technology (HIT) risk management. The work is based on research from around the world. Conclusions Internationally, researchers have been developing methods that can be used to identify, report on, mitigate, and eliminate technology-induced errors. Although there remain issues and challenges associated with the methodologies, they have been shown to improve the quality and safety of HIT. Since the first publications documenting technology-induced errors in healthcare in 2005, we have seen in a short 10 years researchers develop ways of identifying and addressing these types of errors. We have also seen organizations begin to use these approaches. Knowledge has been translated into practice in a short ten years whereas the norm for other research areas is of 20 years. PMID:27830228

  5. Surface errors without semantic impairment in acquired dyslexia: a voxel-based lesion–symptom mapping study

    PubMed Central

    Pillay, Sara B.; Humphries, Colin J.; Gross, William L.; Graves, William W.; Book, Diane S.

    2016-01-01

    Patients with surface dyslexia have disproportionate difficulty pronouncing irregularly spelled words (e.g. pint), suggesting impaired use of lexical-semantic information to mediate phonological retrieval. Patients with this deficit also make characteristic ‘regularization’ errors, in which an irregularly spelled word is mispronounced by incorrect application of regular spelling-sound correspondences (e.g. reading plaid as ‘played’), indicating over-reliance on sublexical grapheme–phoneme correspondences. We examined the neuroanatomical correlates of this specific error type in 45 patients with left hemisphere chronic stroke. Voxel-based lesion–symptom mapping showed a strong positive relationship between the rate of regularization errors and damage to the posterior half of the left middle temporal gyrus. Semantic deficits on tests of single-word comprehension were generally mild, and these deficits were not correlated with the rate of regularization errors. Furthermore, the deep occipital-temporal white matter locus associated with these mild semantic deficits was distinct from the lesion site associated with regularization errors. Thus, in contrast to patients with surface dyslexia and semantic impairment from anterior temporal lobe degeneration, surface errors in our patients were not related to a semantic deficit. We propose that these patients have an inability to link intact semantic representations with phonological representations. The data provide novel evidence for a post-semantic mechanism mediating the production of surface errors, and suggest that the posterior middle temporal gyrus may compute an intermediate representation linking semantics with phonology. PMID:26966139

  6. Economic impact of medication error: a systematic review.

    PubMed

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, and Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies, with one study measuring litigation costs. Four studies included costs incurred in primary care, with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population, with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  7. A critical eye: praise directed toward traits increases children's eye fixations on errors and decreases motivation.

    PubMed

    Zentall, Shannon R; Morris, Bradley J

    2012-12-01

Although there is evidence that praise of different types (i.e., generic vs. nongeneric) influences motivation, it is unclear how this occurs. Generic praise (e.g., "You are smart") conveys that a child possesses a trait responsible for their performance, whereas nongeneric praise (e.g., "You worked hard") conveys that performance is effort-based. Because praise conveys the basis for success, praise may change the interpretation and salience of errors. Specifically, generic praise may highlight the threatening nature of error (i.e., the child does not possess this trait). Because attention is drawn to threats in the environment, we expected generic praise to increase attention to error. We used eyetracking to measure implicit responses to errors (i.e., visual attention: fixation counts and durations) in order to determine the relation between visual attention and verbal reports of motivation (persistence and self-evaluations) in 30 four- to seven-year-old children. Children first saw pictures attributed to them, for which they received either generic or nongeneric praise. The children then saw pictures attributed to them that contained errors, that is, missing features. As a pretest and posttest, the children saw pictures that were "drawn by other children," half of which contained errors. The results indicated that children who received generic praise ("you are a good drawer") produced more and longer fixations on errors, both their "own" and "other children's," than did children who received nongeneric praise ("you did a good job drawing"). More fixations on errors were related to lower persistence and lower self-evaluations. These results suggest that generic praise increases attention to errors because error threatens the possession of a positive trait.

  8. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-90s, Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, is offered as a means to help direct useful data collection strategies.

  9. Medication Errors in Pediatric Anesthesia: A Report From the Wake Up Safe Quality Improvement Initiative.

    PubMed

    Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S

    2017-09-01

Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, which comprised the third highest category of events behind cardiac and respiratory related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions as opposed to one-time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient and more than half of these events caused patient harm. Fifteen events (5%) required a life-sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable.
Our findings characterize the most common types of medication errors in pediatric anesthesia practice and provide guidance on future preventative strategies. Many of these errors will be almost entirely preventable with the use of prefilled medication syringes to avoid accidental ampule swap, bar-coding at the point of medication administration to prevent syringe swap and to confirm the proper dose, and 2-person checking of medication infusions for accuracy.

  10. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder

    PubMed Central

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths’ performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD. PMID:29075227

  11. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder.

    PubMed

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment and intervention on math ability in students with ADHD.

  12. Everyday action in schizophrenia: performance patterns and underlying cognitive mechanisms.

    PubMed

    Kessler, Rachel K; Giovannetti, Tania; MacMullen, Laura R

    2007-07-01

    Everyday action is impaired among individuals with schizophrenia, yet few studies have characterized the nature of this deficit using performance-based measures. This study examined the performance of 20 individuals with schizophrenia or schizoaffective disorder on the Naturalistic Action Test (M. F. Schwartz, L. J. Buxbaum, M. Ferraro, T. Veramonti, & M. Segal, 2003). Performance was coded to examine overall impairment, task accomplishment, and error patterns and was compared with that of healthy controls (n = 28) and individuals with mild dementia (n = 23). Additionally, 2 competing accounts of everyday action deficits, the resource theory and an executive account, were evaluated. When compared with controls, the participants with schizophrenia demonstrated impaired performance. Relative to dementia patients, participants with schizophrenia obtained higher accomplishment scores but committed comparable rates of errors. Moreover, distributions of error types for the 2 groups differed, with the participants with schizophrenia demonstrating greater proportions of errors associated with executive dysfunction. This is the 1st study to show different Naturalistic Action Test performance patterns between 2 neurologically impaired populations. The distinct performance pattern demonstrated by individuals with schizophrenia reflects specific deficits in executive function.

  13. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty and whether the source is anticipated or not. 
It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon to horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses will look at the chi-square values of the error in the difference between the estimated state and the true modeled state using both the empirical and theoretical error covariance matrices for each scenario.
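A minimal numerical sketch of the idea, under one reading of the abstract: scale the theoretical weighted-least-squares covariance by the average weighted measurement-residual variance, so that error sources not captured by the assumed observation noise still show up in the state covariance. The matrices and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 200, 3                      # measurements, state dimension
H = rng.normal(size=(m, n))        # linearized measurement matrix (illustrative)
W = np.eye(m) / 0.5**2             # weights = inverse *assumed* meas. variance
x_true = np.array([1.0, -2.0, 0.5])

# Simulate measurements whose actual noise (0.8) exceeds the assumed noise (0.5),
# standing in for "unanticipated" error sources such as modeling error.
z = H @ x_true + rng.normal(scale=0.8, size=m)

P_theory = np.linalg.inv(H.T @ W @ H)    # maps assumed observation errors only
x_hat = P_theory @ H.T @ W @ z           # WLS state estimate
r = z - H @ x_hat                        # measurement residuals

s2 = (r @ W @ r) / m                     # average weighted residual variance
P_emp = s2 * P_theory                    # empirical state error covariance

# s2 > 1 here because the true noise exceeds the assumed noise, so the
# empirical covariance is appropriately larger than the theoretical one.
print(s2)
```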

  14. Rank score and permutation testing alternatives for regression quantile estimates

    USGS Publications Warehouse

    Cade, B.S.; Richards, J.D.; Mielke, P.W.

    2006-01-01

Performance of quantile rank score tests used for hypothesis testing and constructing confidence intervals for linear quantile regression estimates (0 ≤ τ ≤ 1) was evaluated by simulation for models with p = 2 and 6 predictors, moderate collinearity among predictors, homogeneous and heterogeneous errors, small to moderate samples (n = 20–300), and central to upper quantiles (0.50–0.99). Test statistics evaluated were the conventional quantile rank score T statistic distributed as a χ2 random variable with q degrees of freedom (where q parameters are constrained by H0) and an F statistic with its sampling distribution approximated by permutation. The permutation F-test maintained better Type I errors than the T-test for homogeneous error models with smaller n and more extreme quantiles τ. An F distributional approximation of the F statistic provided some improvements in Type I errors over the T-test for models with > 2 parameters, smaller n, and more extreme quantiles but not as much improvement as the permutation approximation. Both rank score tests required weighting to maintain correct Type I errors when heterogeneity under the alternative model increased to 5 standard deviations across the domain of X. A double permutation procedure was developed to provide valid Type I errors for the permutation F-test when null models were forced through the origin. Power was similar for conditions where both T- and F-tests maintained correct Type I errors but the F-test provided some power at smaller n and extreme quantiles when the T-test had no power because of excessively conservative Type I errors. When the double permutation scheme was required for the permutation F-test to maintain valid Type I errors, power was less than for the T-test with decreasing sample size and increasing quantiles.
Confidence intervals on parameters and tolerance intervals for future predictions were constructed based on test inversion for an example application relating trout densities to stream channel width:depth.
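The general permutation idea behind the abstract's F-test can be illustrated with a generic regression coefficient test; this is not the paper's quantile rank score statistic, just a sketch of approximating a null distribution by permuting the predictor of interest.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data where the coefficient on x1 is truly zero (H0 holds)
n = 60
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 1.0 + 0.0 * x1 + 0.8 * x2 + rng.normal(size=n)

def t_stat(y, x1, x2):
    """t statistic for the x1 coefficient in an OLS fit of y on (1, x1, x2)."""
    X = np.column_stack([np.ones_like(y), x1, x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (len(y) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

obs = t_stat(y, x1, x2)
# Null distribution: recompute the statistic with x1 randomly permuted
perm = np.array([t_stat(y, rng.permutation(x1), x2) for _ in range(999)])
p_value = (1 + np.sum(np.abs(perm) >= np.abs(obs))) / (1 + len(perm))
print(p_value)
```

Permuting the raw predictor is the simplest variant; the paper's double permutation procedure addresses the harder case of null models forced through the origin.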

  15. Ensemble codes involving hippocampal neurons are at risk during delayed performance tests.

    PubMed

    Hampson, R E; Deadwyler, S A

    1996-11-26

    Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble "miscodes" of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated "strength" of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was "weak," indicating that the two types of errors were "linked." It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding "strategy" that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved "strongly" encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the "carried over" information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the "dynamic" nature of the role hippocampus plays in delay type memory tasks.

  16. Errors Analysis of Students in Mathematics Department to Learn Plane Geometry

    NASA Astrophysics Data System (ADS)

    Mirna, M.

    2018-04-01

This article describes the results of qualitative descriptive research that reveals the locations, types and causes of student errors in answering plane geometry problems at the problem-solving level. Answers from 59 students on three test items showed errors ranging from understanding the concepts and principles of geometry itself to errors in applying them to problem solving. The types of error consist of concept errors, principle errors and operational errors. Reflection with four subjects revealed the causes of the errors: 1) students' motivation to learn is very low; 2) in their high school experience, geometry was treated as unimportant; 3) students have very little experience applying their own reasoning to problem solving; and 4) students' reasoning ability is still very low.

  17. Acoustic evidence for phonologically mismatched speech errors.

    PubMed

    Gormley, Andrea

    2015-04-01

Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of speech errors that uncovers non-accommodated or mismatch errors. A mismatch error is a sub-phonemic error that results in an incorrect surface phonology. This type of error could arise during the processing of phonological rules, or it could be made at the motor level of implementation. The results of this work have important implications for both experimental and theoretical research. For experimentalists, it validates the tools used for error induction and the acoustic determination of errors free of perceptual bias. For theorists, this methodology can be used to test the nature of the processes proposed in language production.

  18. System Specification for ADA Integrated Environment Type A AIE(1).

    DTIC Science & Technology

    1982-11-12

includes program library support tools and the linker. The program library is the means by which the AIE supports independent, modular program development... KAPSE TOOL COMMUNICATION Package KAPSE-KAPSE COMMUNICATION (KAPSE.RTS) (Most of it, except the "Language-Defined Packages" CPC) The overall... including classification of errors by severity; 5. perform optimizations for timing and/or space, without changing the functional meaning of a program by the

  19. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, T; Kumaraswamy, L

Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference error up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (3-degree uniform shift). 2D and 3D gamma evaluation were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between the original plan and error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, which a conventional gamma-based pre-treatment QA might not necessarily detect.
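The gamma evaluation referred to here can be illustrated with a simplified 1D analog (the clinical tools compute this on 2D/3D dose grids): each evaluated point passes if the minimum combined distance-to-agreement / dose-difference metric over all reference points is at most 1. The profile and the common 3%/3 mm criteria below are illustrative.

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dta=3.0, dose_tol=0.03):
    """Simplified 1D gamma index: dta in mm, dose_tol as fraction of max dose."""
    ref_max = ref_dose.max()
    gammas = np.empty_like(eval_dose)
    for i, (x, d) in enumerate(zip(positions, eval_dose)):
        dist2 = ((positions - x) / dta) ** 2                 # spatial term
        dose2 = ((ref_dose - d) / (dose_tol * ref_max)) ** 2  # dose term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))           # best match wins
    return gammas

x = np.linspace(0, 100, 201)                   # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)            # reference dose profile
evl = 1.02 * np.exp(-((x - 50.5) / 20) ** 2)   # 2% dose error, 0.5 mm shift

g = gamma_1d(ref, evl, x)
pass_rate = np.mean(g <= 1.0)
print(pass_rate)
```

This also illustrates the abstract's caution: a small systematic error can still yield a high gamma pass rate even though the delivered dose distribution differs from the plan.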

  20. Does raising type 1 error rate improve power to detect interactions in linear regression models? A simulation study.

    PubMed

    Durand, Casey P

    2013-01-01

    Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
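The Monte Carlo approach described can be sketched as follows for one of the three scenarios (continuous by dichotomous). The effect size, sample size, and replicate count are illustrative, not the paper's 240 simulation conditions; the p-value uses a normal approximation to the t reference distribution, which is reasonable at this df.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)

def interaction_p(n, beta_int):
    """p-value for the interaction term in y ~ x + g + x*g (simulated data)."""
    x = rng.normal(size=n)                       # continuous predictor
    g = rng.integers(0, 2, size=n).astype(float) # dichotomous moderator
    y = 0.3 * x + 0.3 * g + beta_int * x * g + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x, g, x * g])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - 4)
    se = sqrt(s2 * np.linalg.inv(X.T @ X)[3, 3])
    t = beta[3] / se
    return erfc(abs(t) / sqrt(2))   # two-sided, normal approximation

# Estimate power at two alpha levels from the same set of replicates
pvals = np.array([interaction_p(100, 0.3) for _ in range(500)])
p05, p10 = np.mean(pvals < 0.05), np.mean(pvals < 0.10)
print(p05, p10)
```

Raising alpha from .05 to .10 buys some power, but when the interaction is truly zero the same move doubles the rate of spurious interactions retained in the model, which is the trade-off the paper quantifies.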

  1. Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions

    ERIC Educational Resources Information Center

    Yormaz, Seha; Sünbül, Önder

    2017-01-01

This study aims to determine the Type I error rates and power of the S1 and S2 indices and the kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…
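For orientation, the chance-corrected agreement idea underlying a kappa statistic can be computed on two examinees' answer strings as below. The copying-detection kappa in this literature conditions on more structure than plain Cohen's kappa, so this is only the generic form, on invented data.

```python
import numpy as np

def cohen_kappa(a, b, n_options=4):
    """Generic chance-corrected agreement between two response vectors."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                # observed agreement
    pa = np.array([np.mean(a == k) for k in range(n_options)])
    pb = np.array([np.mean(b == k) for k in range(n_options)])
    pe = np.sum(pa * pb)                                # chance agreement
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(3)
honest = rng.integers(0, 4, size=40)      # 40-item test, 4 options
copier = honest.copy()
copier[:10] = rng.integers(0, 4, size=10)  # copies 30 of 40 answers
print(cohen_kappa(honest, copier))
```

Agreement well above the chance-corrected baseline is what copying indices formalize into a test with a controlled Type I error rate.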

  2. In Vivo Myeloperoxidase Imaging and Flow Cytometry Analysis of Intestinal Myeloid Cells.

    PubMed

    Hülsdünker, Jan; Zeiser, Robert

    2016-01-01

Myeloperoxidase (MPO) imaging is a non-invasive method to detect cells that produce the enzyme MPO, which is most abundant in neutrophils, macrophages, and inflammatory monocytes. While lacking specificity for any of these three cell types, MPO imaging can provide guidance for further flow cytometry-based analysis of tissues where these cell types reside. Isolation of leukocytes from the intestinal tract is an error-prone procedure. Here, we describe a protocol for intestinal leukocyte isolation that works reliably in our hands and allows for flow cytometry-based analysis, in particular of neutrophils.

  3. Comparing models of change to estimate the mediated effect in the pretest-posttest control group design

    PubMed Central

    Valente, Matthew J.; MacKinnon, David P.

    2017-01-01

Models to assess mediation in the pretest-posttest control group design are understudied in the behavioral sciences even though it is the design of choice for evaluating experimental manipulations. The paper provides analytical comparisons of the four models most commonly used to estimate the mediated effect in this design: Analysis of Covariance (ANCOVA), difference score, residualized change score, and cross-sectional model. Each of these models is fitted using a Latent Change Score specification, and a simulation study assessed bias, Type I error, power, and confidence interval coverage of the four models. All but the ANCOVA model make stringent assumptions about the stability and cross-lagged relations of the mediator and outcome that may not be plausible in real-world applications. When these assumptions do not hold, Type I error and statistical power results suggest that only the ANCOVA model has good performance. The four models are applied to an empirical example. PMID:28845097

  4. Comparing models of change to estimate the mediated effect in the pretest-posttest control group design.

    PubMed

    Valente, Matthew J; MacKinnon, David P

    2017-01-01

Models to assess mediation in the pretest-posttest control group design are understudied in the behavioral sciences even though it is the design of choice for evaluating experimental manipulations. The paper provides analytical comparisons of the four models most commonly used to estimate the mediated effect in this design: Analysis of Covariance (ANCOVA), difference score, residualized change score, and cross-sectional model. Each of these models is fitted using a Latent Change Score specification, and a simulation study assessed bias, Type I error, power, and confidence interval coverage of the four models. All but the ANCOVA model make stringent assumptions about the stability and cross-lagged relations of the mediator and outcome that may not be plausible in real-world applications. When these assumptions do not hold, Type I error and statistical power results suggest that only the ANCOVA model has good performance. The four models are applied to an empirical example.
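The ANCOVA approach to the mediated effect can be sketched as the product of two pretest-adjusted regression paths: treatment to posttest mediator (a), and posttest mediator to posttest outcome (b). The coefficients and data below are simulated for illustration, not taken from the paper, and the sketch uses plain OLS rather than a Latent Change Score specification.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.integers(0, 2, size=n).astype(float)     # randomized treatment
m_pre = rng.normal(size=n)                       # pretest mediator
y_pre = rng.normal(size=n)                       # pretest outcome
m_post = 0.5 * x + 0.6 * m_pre + rng.normal(scale=0.5, size=n)
y_post = 0.4 * m_post + 0.6 * y_pre + rng.normal(scale=0.5, size=n)

def ols(y, *cols):
    """Return OLS coefficients for y regressed on an intercept plus cols."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(m_post, x, m_pre)[1]            # path a: treatment -> posttest mediator
b = ols(y_post, m_post, x, y_pre)[1]    # path b: mediator -> posttest outcome
ab = a * b                              # mediated effect (true value 0.2 here)
print(ab)
```

Adjusting each posttest for its own pretest is what distinguishes the ANCOVA model from the difference score and cross-sectional alternatives compared in the paper.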

  5. A comparison of exact tests for trend with binary endpoints using Bartholomew's statistic.

    PubMed

    Consiglio, J D; Shan, G; Wilding, G E

    2014-01-01

    Tests for trend are important in a number of scientific fields when trends associated with binary variables are of interest. Implementing the standard Cochran-Armitage trend test requires an arbitrary choice of scores assigned to represent the grouping variable. Bartholomew proposed a test for qualitatively ordered samples using asymptotic critical values, but type I error control can be problematic in finite samples. To our knowledge, use of the exact probability distribution has not been explored, and we study its use in the present paper. Specifically we consider an approach based on conditioning on both sets of marginal totals and three unconditional approaches where only the marginal totals corresponding to the group sample sizes are treated as fixed. While slightly conservative, all four tests are guaranteed to have actual type I error rates below the nominal level. The unconditional tests are found to exhibit far less conservatism than the conditional test and thereby gain a power advantage.
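The standard Cochran-Armitage test mentioned above can be written in a minimal asymptotic form; note the score vector that must be chosen by the analyst, which is exactly the arbitrariness the paper's Bartholomew-based approach avoids. The dose-group counts are hypothetical.

```python
import numpy as np
from math import erfc, sqrt

def cochran_armitage(successes, totals, scores):
    """Asymptotic Cochran-Armitage trend test for binary outcomes across groups."""
    r, n, s = (np.asarray(v, dtype=float) for v in (successes, totals, scores))
    p_bar = r.sum() / n.sum()                       # pooled success probability
    t = np.sum(s * (r - n * p_bar))                 # score-weighted deviation
    var = p_bar * (1 - p_bar) * (np.sum(n * s**2) - np.sum(n * s)**2 / n.sum())
    z = t / sqrt(var)
    return z, erfc(abs(z) / sqrt(2))                # two-sided asymptotic p-value

# Hypothetical ordered groups with an increasing response proportion
z, p = cochran_armitage(successes=[2, 6, 12], totals=[20, 20, 20],
                        scores=[0, 1, 2])
print(z, p)
```

The exact versions studied in the paper replace this normal reference distribution with the permutation distribution of the statistic, conditioning on one or both sets of marginal totals.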

  6. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes that are sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software based on desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
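Tables like these are typically built from a normal-approximation formula of the Buderer type: n = z² p(1−p)/d² for the sensitivity (or specificity) estimate, inflated by prevalence to give the total subjects to screen. This is a sketch of that generic formula, not necessarily the exact PASS computation; z = 1.96 assumes two-sided 95% confidence.

```python
from math import ceil

Z95 = 1.96   # two-sided 95% normal quantile

def n_for_sensitivity(sens, precision, prevalence, z=Z95):
    # Diseased subjects needed for the desired CI half-width on sensitivity,
    # then inflated by prevalence to get the total subjects to screen.
    n_diseased = z**2 * sens * (1 - sens) / precision**2
    return ceil(n_diseased / prevalence)

def n_for_specificity(spec, precision, prevalence, z=Z95):
    # Same logic, but the denominator is the healthy fraction (1 - prevalence).
    n_healthy = z**2 * spec * (1 - spec) / precision**2
    return ceil(n_healthy / (1 - prevalence))

# e.g. anticipated 90% sensitivity, +/-5% precision, 10% prevalence
print(n_for_sensitivity(0.90, 0.05, 0.10),
      n_for_specificity(0.90, 0.05, 0.10))
```

The prevalence term is why sensitivity studies of rare conditions require screening far more subjects than the raw binomial formula suggests.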

  7. Evaluation of drug administration errors in a teaching hospital

    PubMed Central

    2012-01-01

Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. The relationship between the occurrence of errors and potential risk factors was investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions. PMID:22409837

  8. Evaluation of drug administration errors in a teaching hospital.

    PubMed

    Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre

    2012-03-12

Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. The relationship between the occurrence of errors and potential risk factors was investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions.
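The two reported error rates can be reproduced directly from the counts in the abstract: 415 of 1501 opportunities involved at least one error, and excluding the 312 wrong time errors (10 of which co-occurred with another error type) leaves 113 administrations with a non-wrong-time error.

```python
# Counts taken verbatim from the abstract
opportunities = 1501
admins_with_error = 415        # administrations with one or more errors
wrong_time = 312               # wrong time errors
wrong_time_plus_other = 10     # wrong time co-occurring with another error type

rate_all = admins_with_error / opportunities
rate_no_wrong_time = (admins_with_error
                      - (wrong_time - wrong_time_plus_other)) / opportunities
print(round(100 * rate_all, 1), round(100 * rate_no_wrong_time, 1))
```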

  9. Localized Glaucomatous Change Detection within the Proper Orthogonal Decomposition Framework

    PubMed Central

    Balasubramanian, Madhusudhanan; Kriegman, David J.; Bowd, Christopher; Holst, Michael; Weinreb, Robert N.; Sample, Pamela A.; Zangwill, Linda M.

    2012-01-01

Purpose. To detect localized glaucomatous structural changes using proper orthogonal decomposition (POD) framework with false-positive control that minimizes confirmatory follow-ups, and to compare the results to topographic change analysis (TCA). Methods. We included 167 participants (246 eyes) with ≥4 Heidelberg Retina Tomograph (HRT)-II exams from the Diagnostic Innovations in Glaucoma Study; 36 eyes progressed by stereo-photographs or visual fields. All other patient eyes (n = 210) were non-progressing. Specificities were evaluated using 21 normal eyes. Significance of change at each HRT superpixel between each follow-up and its nearest baseline (obtained using POD) was estimated using mixed-effects ANOVA. Locations with significant reduction in retinal height (red pixels) were determined using Bonferroni, Lehmann-Romano k-family-wise error rate (k-FWER), and Benjamini-Hochberg false discovery rate (FDR) type I error control procedures. Observed positive rate (OPR) in each follow-up was calculated as the ratio of the number of red pixels within the disk to the disk size. Progression by POD was defined as one or more follow-ups with OPR greater than the anticipated false-positive rate. TCA was evaluated using the recently proposed liberal, moderate, and conservative progression criteria. Results. Sensitivity in progressors, specificity in normals, and specificity in non-progressors, respectively, were POD-Bonferroni = 100%, 0%, and 0%; POD k-FWER = 78%, 86%, and 43%; POD-FDR = 78%, 86%, and 43%; POD k-FWER with retinal height change ≥50 μm = 61%, 95%, and 60%; TCA-liberal = 86%, 62%, and 21%; TCA-moderate = 53%, 100%, and 70%; and TCA-conservative = 17%, 100%, and 84%. Conclusions. With a stronger control of type I errors, k-FWER in POD framework minimized confirmatory follow-ups while providing diagnostic accuracy comparable to TCA. 
Thus, POD with k-FWER shows promise to reduce the number of confirmatory follow-ups required for clinical care and studies evaluating new glaucoma treatments. (ClinicalTrials.gov number, NCT00221897.) PMID:22491406
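The Bonferroni and Benjamini-Hochberg procedures named above trade stringency for power in different ways. A minimal sketch of the two corrections on a hypothetical p-value vector (not the HRT superpixel data) illustrates the difference:

```python
# Toy comparison of Bonferroni (family-wise error rate) and
# Benjamini-Hochberg (false discovery rate) corrections.
# The p-values below are hypothetical, not HRT data.

def bonferroni(pvals, alpha=0.05):
    """Reject H_i when p_i <= alpha / m (controls the FWER)."""
    m = len(pvals)
    return [p <= alpha / m for p in pvals]

def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up BH procedure: reject the k smallest p-values, where
    k = max{i : p_(i) <= i * alpha / m} (controls the FDR)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
n_bonf = sum(bonferroni(pvals))          # Bonferroni rejects fewer
n_bh = sum(benjamini_hochberg(pvals))    # BH rejects more at the same alpha
```

At the same nominal alpha, the FWER correction is the more conservative of the two, which is why the abstract reports a specificity/sensitivity trade-off between the k-FWER and FDR variants.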

  10. Quantifying the burden of opioid medication errors in adult oncology and palliative care settings: A systematic review.

    PubMed

    Heneka, Nicole; Shaw, Tim; Rowett, Debra; Phillips, Jane L

    2016-06-01

    Opioids are the primary pharmacological treatment for cancer pain and, in the palliative care setting, are routinely used to manage symptoms at the end of life. Opioids are one of the most frequently reported drug classes in medication errors causing patient harm. Despite their widespread use, little is known about the incidence and impact of opioid medication errors in oncology and palliative care settings. To determine the incidence, types and impact of reported opioid medication errors in adult oncology and palliative care patient settings. A systematic review. Five electronic databases and the grey literature were searched from 1980 to August 2014. Empirical studies published in English, reporting data on opioid medication error incidence, types or patient impact, within adult oncology and/or palliative care services, were included. Popay's narrative synthesis approach was used to analyse data. Five empirical studies were included in this review. Opioid error incidence rate was difficult to ascertain as each study focussed on a single narrow area of error. The predominant error type related to deviation from opioid prescribing guidelines, such as incorrect dosing intervals. None of the included studies reported the degree of patient harm resulting from opioid errors. This review has highlighted the paucity of the literature examining opioid error incidence, types and patient impact in adult oncology and palliative care settings. Defining, identifying and quantifying error reporting practices for these populations should be an essential component of future oncology and palliative care quality and safety initiatives. © The Author(s) 2015.

  11. Fat and Sugar Metabolism During Exercise in Patients With Metabolic Myopathy

    ClinicalTrials.gov

    2017-08-31

    Metabolism, Inborn Errors; Lipid Metabolism, Inborn Errors; Carbohydrate Metabolism, Inborn Errors; Long-Chain 3-Hydroxyacyl-CoA Dehydrogenase Deficiency; Glycogenin-1 Deficiency (Glycogen Storage Disease Type XV); Carnitine Palmitoyl Transferase 2 Deficiency; VLCAD Deficiency; Medium-chain Acyl-CoA Dehydrogenase Deficiency; Multiple Acyl-CoA Dehydrogenase Deficiency; Carnitine Transporter Deficiency; Neutral Lipid Storage Disease; Glycogen Storage Disease Type II; Glycogen Storage Disease Type III; Glycogen Storage Disease Type IV; Glycogen Storage Disease Type V; Muscle Phosphofructokinase Deficiency; Phosphoglucomutase 1 Deficiency; Phosphoglycerate Mutase Deficiency; Phosphoglycerate Kinase Deficiency; Phosphorylase Kinase Deficiency; Beta Enolase Deficiency; Lactate Dehydrogenase Deficiency; Glycogen Synthase Deficiency

  12. Iatrogenic Errors during Root Canal Instrumentation Performed by Dental Students

    PubMed Central

    Hendi, Seyedeh Sareh; Karkehabadi, Hamed; Eskandarloo, Amir

    2018-01-01

    Introduction: The present study set out to investigate the quality of training and its association with the quality of root canal therapy performed by fifth-year dentistry students. Methods and Materials: A total of 432 records of endodontic treatment performed by fifth-year dentistry students qualified for further investigation. Radiographs were assessed by two independent endodontists. Apical transportation, apical perforation, gouging, ledge formation, and the quality of temporary restoration were the error types investigated in the present study. Results: The prevalence of apical transportation, ledge formation, and apical perforation errors was significantly higher in molars than in other types of teeth. The most prevalent type of error was apical transportation, which was significantly more frequent in mandibular teeth. There were no significant differences among teeth in terms of the other types of errors. Conclusion: The quality of training provided to dentistry students should be improved and the endodontic curriculum should be modified. PMID:29692848

  13. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. 
The frequency of errors was adjusted for the estimated proportion of workers present at work each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent-minded" errors involving failures to execute action plans as intended.
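The exposure adjustment described above (normalizing raw hourly error counts by the share of staff on duty in that hour) can be sketched as follows; the counts and staffing proportions here are invented for illustration, not the survey data:

```python
# Adjust raw hourly error counts for the estimated proportion of the
# workforce present in each hour; all numbers are hypothetical.

def adjusted_rates(error_counts, staffing_share):
    """Return error counts normalized by staffing exposure per hour."""
    return [c / s if s > 0 else float("nan")
            for c, s in zip(error_counts, staffing_share)]

# Hypothetical counts for 03:00 vs 15:00: fewer raw errors at night,
# but far fewer technicians on duty.
counts   = [4, 10]      # errors observed at 03:00 and 15:00
staffing = [0.10, 1.0]  # share of the workforce present

rates = adjusted_rates(counts, staffing)
# After adjustment, the night hour shows the higher per-worker rate,
# which is the kind of pattern the circadian analysis looks for.
```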

  14. Dark Signal Characterization of 1.7 micron cutoff devices for SNAP

    NASA Astrophysics Data System (ADS)

    Smith, R. M.; SNAP Collaboration

    2004-12-01

    We report initial progress characterizing non-photometric sources of error -- dark current, noise, and zero point drift -- for 1.7 micron cutoff HgCdTe and InGaAs detectors under development by Raytheon, Rockwell, and Sensors Unlimited for SNAP. Dark current specifications can already be met with several detector types. Changes to the manufacturing process are being explored to improve the noise reduction available through multiple sampling. In some cases, a significant number of pixels suffer from popcorn noise, with a few percent of all pixels exhibiting a tenfold noise increase. A careful study of zero point drifts is also under way, since these errors can dominate dark current, and may contribute to the noise degradation seen in long exposures.

  15. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    PubMed

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. To evaluate the frequency and nature of non-clinical transcription errors made using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72%) reports contained ≥1 error, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant' and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' was the most common error sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports of >25 sentences containing an average of 1.23 errors per report compared with 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, some had the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.
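The percentages reported above follow directly from the raw counts given in the abstract; a quick arithmetic sanity check:

```python
# Recompute the reported proportions from the raw counts in the abstract.
reports = 378
with_errors = 67
significant, very_significant = 7, 9
total_errors = 90
insignificant = 74

pct = lambda n, d: round(100 * n / d, 2)

assert pct(with_errors, reports) == 17.72        # reports with >=1 error
assert pct(significant, reports) == 1.85
assert pct(very_significant, reports) == 2.38
assert pct(insignificant, total_errors) == 82.22
assert pct(significant, total_errors) == 7.78
assert pct(very_significant, total_errors) == 10.0
```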

  16. Brain and Music: An Intraoperative Stimulation Mapping Study of a Professional Opera Singer.

    PubMed

    Riva, Marco; Casarotti, Alessandra; Comi, Alessandro; Pessina, Federico; Bello, Lorenzo

    2016-09-01

    Music is one of the most sophisticated and fascinating functions of the brain. Yet, how music is instantiated within the brain is not fully characterized. Singing is a peculiar aspect of music, in which both musical and linguistic skills are required to provide a merged vocal output. Identifying the neural correlates of this process is relevant for both clinical and research purposes. An adult white man with a presumed left temporal glioma was studied. He is a professional opera singer. A tailored music evaluation, the Montreal Battery of Evaluation of Amusia, was performed preoperatively and postoperatively, with long-term follow-up. Intraoperative stimulation mapping (ISM) with awake surgery with a specific music evaluation battery was used to identify and preserve the cortical and subcortical structures subserving music, along with standard motor-sensory and language mapping. A total resection of a grade I glioma was achieved. The Montreal Battery of Evaluation of Amusia reported an improvement in musical scores after the surgery. ISM consistently elicited several types of errors in the superior temporal gyrus and, to a lesser extent, in the inferior frontal operculum. Most errors occurred during score reading; fewer errors were elicited during the assessment of rhythm. No spontaneous errors were recorded. These areas did not overlap with eloquent sites for counting or naming. ISM and a tailored music battery enabled better characterization of a specific network within the brain subserving score reading independently from speech with long-term clinical impact. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. The use of a contextual, modal and psychological classification of medication errors in the emergency department: a retrospective descriptive study.

    PubMed

    Cabilan, C J; Hughes, James A; Shannon, Carl

    2017-12-01

    To describe the contextual, modal and psychological classification of medication errors in the emergency department, and to identify the factors associated with the reported medication errors. The causes of medication errors are unique in every clinical setting; hence, error minimisation strategies are not always effective. For this reason, it is fundamental to understand the causes specific to the emergency department so that targeted strategies can be implemented. Retrospective analysis of reported medication errors in the emergency department. All voluntarily staff-reported medication-related incidents from 2010-2015 from the hospital's electronic incident management system were retrieved for analysis. Contextual classification involved the time, place and the type of medications involved. Modal classification pertained to the stage and issue (e.g. wrong medication, wrong patient). Psychological classification categorised the errors in planning (knowledge-based and rule-based errors) and skill (slips and lapses). There were 405 errors reported. Most errors occurred in the acute care area, short-stay unit and resuscitation area, during the busiest shifts (0800-1559, 1600-2259). Half of the errors involved high-alert medications. Many of the errors occurred during administration (62·7%), prescribing (28·6%) and during both stages (18·5%). Wrong dose, wrong medication and omission were the issues that dominated. Knowledge-based errors characterised the errors that occurred in prescribing and administration. The highest proportion of slips (79·5%) and lapses (76·1%) occurred during medication administration. It is likely that some of the errors occurred due to the lack of adherence to safety protocols. Technology such as computerised prescribing, barcode medication administration and reminder systems could potentially decrease the medication errors in the emergency department. 
There was a possibility that some of the errors could be prevented if safety protocols were adhered to, which highlights the need to also address clinicians' attitudes towards safety. Technology can be implemented to help minimise errors in the ED, but this must be coupled with efforts to enhance the culture of safety. © 2017 John Wiley & Sons Ltd.

  18. Errors introduced by dose scaling for relative dosimetry

    PubMed Central

    Watanabe, Yoichi; Hayashi, Naoki

    2012-01-01

    Some dosimeters require a relationship between detector signal and delivered dose. The relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing it to a specific dose. Such a procedure, often called "relative dosimetry", allows us to skip the time-consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeter, and a calibration equation from one set of data was used to estimate doses for dosimeters from the other batches. The scaled doses were then compared with expected doses, which were obtained by using the true calibration equation specific to each batch. In general, the magnitude of errors increased with increasing deviation of the dose scaling factor from unity. Also, the errors strongly depended on the difference in the shape of the true and reference calibration curves. For example, for the BANG3 polymer gel, of which the characteristic curve can be approximated with a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for other batches requiring smaller magnitudes of dose scaling, or scaling factors of 0.93 or 1.02. The characteristic curves of EDR2 and EBT2 films required nonlinear equations. With those dosimeters, errors larger than 5% were commonly observed in the dose ranges of below 50% and above 150% of the normalization dose. 
In conclusion, the dose scaling for relative dosimetry introduces large errors in the measured doses when a large dose scaling is applied, and this procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658
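The scaling error described above can be illustrated with a toy linear calibration. The coefficients below are invented and do not correspond to any of the dosimeters tested; they merely show how a reference curve, rescaled to agree at the normalization dose, still mis-reads doses away from that point when the curve shapes differ:

```python
# Toy illustration of relative-dosimetry scaling error. A reference
# calibration curve ("batch A") is used to read out a "batch B"
# dosimeter whose true calibration differs; readings are then scaled
# so the two agree at the normalization point. All coefficients are
# hypothetical.

def dose_from_signal(signal, slope, intercept):
    """Linear characteristic curve: dose = slope * signal + intercept."""
    return slope * signal + intercept

ref  = dict(slope=2.00, intercept=0.0)  # batch-A (reference) curve
true = dict(slope=1.80, intercept=5.0)  # batch-B (true) curve

norm_signal = 50.0                      # signal at the normalization dose
signals = [10.0, 50.0, 90.0]

# Scale factor chosen so the normalization point agrees exactly.
scale = dose_from_signal(norm_signal, **true) / dose_from_signal(norm_signal, **ref)

errors = []
for s in signals:
    scaled = scale * dose_from_signal(s, **ref)
    expected = dose_from_signal(s, **true)
    errors.append(scaled - expected)

# The error vanishes at the normalization point and grows away from it
# whenever the two curves differ in shape (here, a nonzero intercept).
```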

  19. Dissociation in Optokinetic Stimulation Sensitivity between Omission and Substitution Reading Errors in Neglect Dyslexia.

    PubMed

    Daini, Roberta; Albonico, Andrea; Malaspina, Manuela; Martelli, Marialuisa; Primativo, Silvia; Arduino, Lisa S

    2013-01-01

    Although omission and substitution errors in neglect dyslexia (ND) patients have always been considered as different manifestations of the same acquired reading disorder, recently, we proposed a new dual mechanism model. While omissions are related to the exploratory disorder which characterizes unilateral spatial neglect (USN), substitutions are due to a perceptual integration mechanism. A consequence of this hypothesis is that specific training for omission-type ND patients would aim at restoring the oculo-motor scanning and should not improve reading in substitution-type ND. With this aim we administered an optokinetic stimulation (OKS) to two brain-damaged patients with both USN and ND, MA and EP, who showed ND mainly characterized by omissions and substitutions, respectively. MA also showed an impairment in oculo-motor behavior with a non-reading task, while EP did not. The two patients presented a dissociation with respect to their sensitivity to OKS, so that, as expected, MA was positively affected, while EP was not. Our results confirm a dissociation between the two mechanisms underlying omission and substitution reading errors in ND patients. Moreover, they suggest that such a dissociation could possibly be extended to the effectiveness of rehabilitative procedures, and that patients who mainly omit contralesional-sided letters would benefit from OKS.

  20. Electrostatically Embedded Many-Body Approximation for Systems of Water, Ammonia, and Sulfuric Acid and the Dependence of Its Performance on Embedding Charges.

    PubMed

    Leverentz, Hannah R; Truhlar, Donald G

    2009-06-09

    This work tests the capability of the electrostatically embedded many-body (EE-MB) method to calculate accurate (relative to conventional calculations carried out at the same level of electronic structure theory and with the same basis set) binding energies of mixed clusters (as large as 9-mers) consisting of water, ammonia, sulfuric acid, and ammonium and bisulfate ions. This work also investigates the dependence of the accuracy of the EE-MB approximation on the type and origin of the charges used for electrostatically embedding these clusters. The conclusions reached are that for all of the clusters and sets of embedding charges studied in this work, the electrostatically embedded three-body (EE-3B) approximation is capable of consistently yielding relative errors of less than 1% and an average relative absolute error of only 0.3%, and that the performance of the EE-MB approximation does not depend strongly on the specific set of embedding charges used. The electrostatically embedded pairwise approximation has errors about an order of magnitude larger than EE-3B. This study also explores the question of why the accuracy of the EE-MB approximation shows so little dependence on the types of embedding charges employed.
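The underlying many-body expansion (without the electrostatic embedding, which is the paper's refinement) can be sketched on a synthetic energy function. Everything here is a toy: the "monomers" are numbers and `toy_energy` is invented so that genuine three-body couplings exist, making the two-body truncation inexact while the three-body truncation recovers them:

```python
# Toy many-body expansion: approximate the total energy of N "monomers"
# by sums of monomer energies plus pairwise and triple-wise corrections.
# toy_energy is synthetic, not a quantum-chemical energy.
from itertools import combinations

def toy_energy(cluster):
    """Synthetic cluster energy with monomer, pair, and weak 3-body terms."""
    e = sum(0.5 * x for x in cluster)
    e += sum(-1.0 / (1.0 + abs(a - b)) for a, b in combinations(cluster, 2))
    e += sum(0.01 * a * b * c for a, b, c in combinations(cluster, 3))
    return e

def mb_expansion(monomers, order):
    """Truncated many-body expansion of toy_energy at the given order."""
    n = len(monomers)
    e1 = [toy_energy([m]) for m in monomers]
    total = sum(e1)
    if order >= 2:
        for i, j in combinations(range(n), 2):
            total += toy_energy([monomers[i], monomers[j]]) - e1[i] - e1[j]
    if order >= 3:
        for i, j, k in combinations(range(n), 3):
            d_ij = toy_energy([monomers[i], monomers[j]]) - e1[i] - e1[j]
            d_ik = toy_energy([monomers[i], monomers[k]]) - e1[i] - e1[k]
            d_jk = toy_energy([monomers[j], monomers[k]]) - e1[j] - e1[k]
            total += (toy_energy([monomers[i], monomers[j], monomers[k]])
                      - e1[i] - e1[j] - e1[k] - d_ij - d_ik - d_jk)
    return total

cluster = [1.0, 2.0, 3.0, 4.0]
exact = toy_energy(cluster)
err2 = abs(mb_expansion(cluster, 2) - exact)  # misses the 3-body terms
err3 = abs(mb_expansion(cluster, 3) - exact)  # exact here (no 4-body terms)
```

The pattern mirrors the paper's finding: the two-body truncation carries the error of every neglected triple interaction, while the three-body truncation is markedly more accurate.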

  1. Dissociation in Optokinetic Stimulation Sensitivity between Omission and Substitution Reading Errors in Neglect Dyslexia

    PubMed Central

    Daini, Roberta; Albonico, Andrea; Malaspina, Manuela; Martelli, Marialuisa; Primativo, Silvia; Arduino, Lisa S.

    2013-01-01

    Although omission and substitution errors in neglect dyslexia (ND) patients have always been considered as different manifestations of the same acquired reading disorder, recently, we proposed a new dual mechanism model. While omissions are related to the exploratory disorder which characterizes unilateral spatial neglect (USN), substitutions are due to a perceptual integration mechanism. A consequence of this hypothesis is that specific training for omission-type ND patients would aim at restoring the oculo-motor scanning and should not improve reading in substitution-type ND. With this aim we administered an optokinetic stimulation (OKS) to two brain-damaged patients with both USN and ND, MA and EP, who showed ND mainly characterized by omissions and substitutions, respectively. MA also showed an impairment in oculo-motor behavior with a non-reading task, while EP did not. The two patients presented a dissociation with respect to their sensitivity to OKS, so that, as expected, MA was positively affected, while EP was not. Our results confirm a dissociation between the two mechanisms underlying omission and substitution reading errors in ND patients. Moreover, they suggest that such a dissociation could possibly be extended to the effectiveness of rehabilitative procedures, and that patients who mainly omit contralesional-sided letters would benefit from OKS. PMID:24062678

  2. Sensitivity and specificity of dosing alerts for dosing errors among hospitalized pediatric patients

    PubMed Central

    Stultz, Jeremy S; Porter, Kyle; Nahata, Milap C

    2014-01-01

    Objectives To determine the sensitivity and specificity of a dosing alert system for dosing errors and to compare the sensitivity of a proprietary system with and without institutional customization at a pediatric hospital. Methods A retrospective analysis of medication orders, orders causing dosing alerts, reported adverse drug events, and dosing errors during July, 2011 was conducted. Dosing errors with and without alerts were identified and the sensitivity of the system with and without customization was compared. Results There were 47 181 inpatient pediatric orders during the studied period; 257 dosing errors were identified (0.54%). The sensitivity of the system for identifying dosing errors was 54.1% (95% CI 47.8% to 60.3%) if customization had not occurred and increased to 60.3% (CI 54.0% to 66.3%) with customization (p=0.02). The sensitivity of the system for underdoses was 49.6% without customization and 60.3% with customization (p=0.01). Specificity of the customized system for dosing errors was 96.2% (CI 96.0% to 96.3%) with a positive predictive value of 8.0% (CI 6.8% to 9.3%). All dosing errors had an alert over-ridden by the prescriber, and 40.6% of the dosing errors that generated alerts were nevertheless administered to the patient. The lack of indication-specific dose ranges was the most common reason why an alert did not occur for a dosing error. Discussion Advances in dosing alert systems should aim to improve the sensitivity and positive predictive value of the system for dosing errors. Conclusions The dosing alert system had a low sensitivity and positive predictive value for dosing errors, but might have prevented dosing errors from reaching patients. Customization increased the sensitivity of the system for dosing errors. PMID:24496386
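The sensitivity, specificity, and positive predictive value quoted above are standard confusion-matrix ratios. A minimal sketch, using hypothetical cell counts chosen only to be of roughly the magnitude the study reports (they are not the actual study data), shows why a highly specific system can still have a low PPV when true dosing errors are rare:

```python
# Sensitivity, specificity, and positive predictive value from
# alert/error counts; the counts below are hypothetical.

def alert_metrics(tp, fp, fn, tn):
    """tp: errors that triggered an alert; fp: alerts on correct orders;
    fn: errors with no alert; tn: correct orders with no alert."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

sens, spec, ppv = alert_metrics(tp=155, fp=1780, fn=102, tn=45000)
# Even with specificity above 0.96, the rarity of true errors among
# ~47,000 orders leaves the PPV below 0.1: most alerts are false alarms.
```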

  3. Antecedents of willingness to report medical treatment errors in health care organizations: a multilevel theoretical framework.

    PubMed

    Naveh, Eitan; Katz-Navon, Tal

    2014-01-01

    To avoid errors and improve patient safety and quality of care, health care organizations need to identify the sources of failures and facilitate implementation of corrective actions. Hence, health care organizations try to collect reports and data about errors by investing enormous resources in reporting systems. However, despite health care organizations' declared goal of increasing the voluntary reporting of errors and although the Patient Safety and Quality Improvement Act of 2005 (S.544, Public Law 109-41) legalizes efforts to secure reporters from specific liabilities, the problem of underreporting of adverse events by staff members remains. The purpose of the paper is to develop a theory-based model and a set of propositions to understand the antecedents of staff members' willingness to report errors based on a literature synthesis. The model aims to explore a complex system of considerations employees use when deciding whether to report their errors or be silent about them. The model integrates the influences of three types of organizational climates (psychological safety, psychological contracts, and safety climate) and individual perceptions of the applicability of the organization's procedures and proposes their mutual influence on willingness to report errors and, as a consequence, patient safety. The model suggests that managers should try to control and influence both the way employees perceive procedure applicability and organizational context-i.e., psychological safety, no-blame contracts, and safety climate-to increase reporting and improve patient safety.

  4. Spelling Errors of Dyslexic Children in Bosnian Language with Transparent Orthography

    ERIC Educational Resources Information Center

    Duranovic, Mirela

    2017-01-01

    The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in Bosnian language with transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% of phonological errors, 10% of orthographic errors, and 4%…

  5. Fault detection and isolation in motion monitoring system.

    PubMed

    Kim, Duk-Jin; Suk, Myoung Hoon; Prabhakaran, B

    2012-01-01

    Pervasive computing has become a very active research field. A wrist-worn device can trace human movement to record motion boundaries, and one's localized visiting areas can be used to study social life patterns. Pervasive computing also supports patient monitoring: a daily monitoring system enables longitudinal studies of conditions such as Alzheimer's disease, Parkinson's disease, and obesity. Due to the nature of the monitoring sensors (on-body wireless sensors), however, signal noise or faulty-sensor errors can be present at any time. Many research works have addressed these problems, most with a large deployment of sensors. In this paper, we present faulty-sensor detection and isolation using only two on-body sensors. We investigate three different types of sensor errors: the SHORT error, the CONSTANT error, and the NOISY SENSOR error (see Section V for details). Our experimental results show that the success rate of isolating faulty signals averages over 91.5% for fault type 1, over 92% for fault type 2, and over 99% for fault type 3, with a fault prior of 30% sensor errors.
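The three fault types named above (SHORT: a brief spike; CONSTANT: a stuck value; NOISY: excess variance) can be caricatured with simple robust statistics. This is a hedged sketch with arbitrary thresholds, not the paper's detector:

```python
# Heuristic sketch of the three sensor fault types: SHORT (isolated
# spike), CONSTANT (stuck value), NOISY (broadband excess variance).
# Thresholds are arbitrary illustrations, not the paper's method.
from statistics import median

def classify_window(window, spike_thresh=10.0, noise_thresh=2.0):
    """Classify a window of sensor readings using median-based statistics
    (the median is robust to a single spike, unlike the mean)."""
    med = median(window)
    dev = [abs(x - med) for x in window]
    mad = median(dev)               # median absolute deviation
    if max(dev) < 1e-9:
        return "CONSTANT"           # every sample stuck at one value
    if max(dev) > spike_thresh and sum(d > spike_thresh for d in dev) <= 2:
        return "SHORT"              # one or two isolated spikes
    if mad > noise_thresh:
        return "NOISY"              # variance well above normal
    return "OK"

label = classify_window([0, 0, 0, 0, 50, 0, 0, 0, 0, 0])  # -> "SHORT"
```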

  6. Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    PubMed Central

    Dudoit, Sandrine; Gilbert, Houston N.; van der Laan, Mark J.

    2014-01-01

    Summary This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. PMID:18932138
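The gEV error rate defined above, gEV(g) = E[g(Vn, Sn)] with g(V, S) = V/(V + S), is the FDR. Its meaning can be illustrated by Monte Carlo estimation of the false discovery proportion under a toy fixed-threshold testing rule (the distributions and rule are invented for illustration, not the article's resampling-based procedures):

```python
# Empirical estimate of gEV(g) = E[V/(V+S)] (the FDR) under a toy
# simulation: null p-values are uniform, alternative p-values are
# concentrated near zero, and every test uses a fixed per-test
# threshold. Purely illustrative.
import random

def simulate_fdp(m_null=80, m_alt=20, alpha=0.05, rng=None):
    """One experiment: count false positives V and true positives S at a
    fixed threshold and return the false discovery proportion."""
    rng = rng or random.Random()
    v = sum(rng.random() < alpha for _ in range(m_null))      # null rejections
    s = sum(rng.random() ** 4 < alpha for _ in range(m_alt))  # alt rejections
    return v / (v + s) if (v + s) > 0 else 0.0

rng = random.Random(0)
fdr_hat = sum(simulate_fdp(rng=rng) for _ in range(2000)) / 2000
# fdr_hat estimates the expected proportion of Type I errors among the
# rejections; FDR-controlling procedures choose the rejection rule so
# that this expectation stays below a target level.
```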

  7. A constrained-gradient method to control divergence errors in numerical MHD

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-10-01

    In numerical magnetohydrodynamics (MHD), a major challenge is maintaining ∇·B = 0. Constrained transport (CT) schemes achieve this but have been restricted to specific methods. For more general (meshless, moving-mesh, ALE) methods, 'divergence-cleaning' schemes reduce the ∇·B errors; however, they can still be significant and can lead to systematic errors which converge away slowly. We propose a new constrained gradient (CG) scheme which augments these with a projection step, and can be applied to any numerical scheme with a reconstruction. This iteratively approximates the least-squares minimizing, globally divergence-free reconstruction of the fluid. Unlike 'locally divergence free' methods, this actually minimizes the numerically unstable ∇·B terms, without affecting the convergence order of the method. We implement this in the mesh-free code GIZMO and compare various test problems. Compared to cleaning schemes, our CG method reduces the maximum ∇·B errors by ~1-3 orders of magnitude (~2-5 dex below typical errors if no ∇·B cleaning is used). By preventing large ∇·B at discontinuities, this eliminates systematic errors at jumps. Our CG results are comparable to CT methods; for practical purposes, the ∇·B errors are eliminated. The cost is modest, ~30 per cent of the hydro algorithm, and the CG correction can be implemented in a range of numerical MHD methods. While for many problems we find Dedner-type cleaning schemes are sufficient for good results, we identify a range of problems where using only Powell or '8-wave' cleaning can produce order-of-magnitude errors.
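The ∇·B errors discussed above are what a discrete divergence operator measures on the numerical field. A minimal grid-based sketch (central differences on a uniform mesh, a stand-in for the meshless reconstructions the paper actually uses) shows a divergence-free field registering zero while a non-solenoidal one does not:

```python
# Discrete divergence of a magnetic field on a uniform grid via
# 2nd-order central differences. The analytically divergence-free
# field B = (-y, x, 0) gives div B = 0 up to round-off, while a field
# with B_x = x gives div B = 1. A toy stand-in for the solenoidal
# constraint, not the paper's constrained-gradient scheme.
import numpy as np

def max_abs_div(Bx, By, Bz, h):
    """Max |div B| over interior points using central differences."""
    div = ((Bx[2:, 1:-1, 1:-1] - Bx[:-2, 1:-1, 1:-1])
         + (By[1:-1, 2:, 1:-1] - By[1:-1, :-2, 1:-1])
         + (Bz[1:-1, 1:-1, 2:] - Bz[1:-1, 1:-1, :-2])) / (2 * h)
    return float(np.max(np.abs(div)))

n, h = 16, 1.0 / 15
x, y, z = np.meshgrid(*([np.linspace(0, 1, n)] * 3), indexing="ij")

solenoidal = max_abs_div(-y, x, np.zeros_like(x), h)                # ~ 0
divergent = max_abs_div(x, np.zeros_like(x), np.zeros_like(x), h)   # ~ 1
```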

  8. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
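The key feature of a multiplicative error model, y_i = x_i(1 + e_i), is that the noise variance grows with the true value, so an ordinary least-squares fit (which assumes equal-variance additive noise) weights large measurements too heavily. A hedged sketch, not the estimators derived in the paper, contrasts OLS with a variance-aware weighted fit:

```python
# Sketch: measurements with multiplicative errors, y_i = b*t_i*(1+e_i),
# fitted by ordinary LS (additive, equal-variance assumption) and by
# weighted LS with weights 1/t_i^2 reflecting var(y_i) proportional to
# t_i^2. Purely illustrative; all numbers are synthetic.
import random

def fit_slope(t, y, weights):
    """Weighted least-squares slope for a line through the origin."""
    num = sum(w * ti * yi for w, ti, yi in zip(weights, t, y))
    den = sum(w * ti * ti for w, ti in zip(weights, t))
    return num / den

rng = random.Random(42)
b_true, sigma = 2.0, 0.05
t = [float(i) for i in range(1, 201)]
y = [b_true * ti * (1 + rng.gauss(0, sigma)) for ti in t]

b_ols = fit_slope(t, y, [1.0] * len(t))            # additive-error assumption
b_wls = fit_slope(t, y, [1 / ti**2 for ti in t])   # multiplicative-aware

# Both estimators are consistent here, but the weighted fit has smaller
# variance because large-t measurements carry proportionally larger
# errors and should be down-weighted.
```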

  9. Causal Evidence from Humans for the Role of Mediodorsal Nucleus of the Thalamus in Working Memory.

    PubMed

    Peräkylä, Jari; Sun, Lihua; Lehtimäki, Kai; Peltola, Jukka; Öhman, Juha; Möttönen, Timo; Ogawa, Keith H; Hartikainen, Kaisa M

    2017-12-01

    The mediodorsal nucleus of the thalamus (MD), with its extensive connections to the lateral pFC, has been implicated in human working memory and executive functions. However, this understanding is based solely on indirect evidence from human lesion and imaging studies and animal studies. Direct, causal evidence from humans is missing. To obtain direct evidence for MD's role in humans, we studied patients treated with deep brain stimulation (DBS) for refractory epilepsy. This treatment is thought to prevent the generalization of a seizure by disrupting the functioning of the patient's anterior nuclei of the thalamus (ANT) with high-frequency electric stimulation. This structure is located superior and anterior to MD, and when the DBS lead is implanted in ANT, tip contacts of the lead typically penetrate through ANT into the adjoining MD. To study the role of MD in human executive functions and working memory, we periodically disrupted and recovered MD's function with high-frequency electric stimulation using DBS contacts reaching MD while participants performed a cognitive task engaging several aspects of executive functions. We hypothesized that the efficacy of executive functions, specifically working memory, is impaired when the functioning of MD is perturbed by high-frequency stimulation. Eight participants treated with ANT-DBS for refractory epilepsy performed a computer-based test of executive functions while DBS was repeatedly switched ON and OFF at MD and at the control location (ANT). In comparison to stimulation of the control location, when MD was stimulated, participants committed 2.26 times more errors in general (total errors; OR = 2.26, 95% CI [1.69, 3.01]) and 2.88 times more working memory-related errors specifically (incorrect button presses; OR = 2.88, CI [1.95, 4.24]). 
Similarly, participants committed 1.81 times more errors in general (OR = 1.81, CI [1.45, 2.24]) and 2.08 times more working memory-related errors (OR = 2.08, CI [1.57, 2.75]) in comparison to the no-stimulation condition. "Total errors" is a composite score consisting of basic error types and was mostly driven by working memory-related errors. The facts that MD and the control location, ANT, are only a few millimeters from each other and that their stimulation produces very different results highlight the location-specific effect of DBS rather than a regionally unspecific general effect. In conclusion, disrupting and recovering MD's function with high-frequency electric stimulation modulated participants' online working memory performance, providing causal, in vivo evidence from humans for the role of MD in human working memory.
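    For readers unfamiliar with the statistic, an odds ratio and its Wald 95% CI are computed from a 2×2 table of error counts along these lines (the counts below are hypothetical, not the study's data):

```python
import math

# Odds ratio with a Wald 95% CI from a 2x2 table of counts.
# a, b = errors / correct responses under stimulation;
# c, d = errors / correct responses under the control condition.
def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts, NOT the study's data
print(odds_ratio_ci(90, 310, 45, 355))
```

    Note the interval is symmetric on the log scale, which is why published ORs come with asymmetric CIs like [1.69, 3.01].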

  10. Tropical Timber Identification using Backpropagation Neural Network

    NASA Astrophysics Data System (ADS)

    Siregar, B.; Andayani, U.; Fatihah, N.; Hakim, L.; Fahmi, F.

    2017-01-01

    Each type of wood has different characteristics. Identifying the type of wood properly is important, especially for industries that need to know the type of timber specifically. However, identification requires expertise, and only a limited number of experts are available. In addition, manual identification, even by experts, is rather inefficient because it requires a lot of time and is subject to human error. To overcome these problems, a digital-image-based method to identify the type of timber automatically is needed. In this study, a backpropagation neural network is used as the artificial intelligence component. Several stages were developed: microscope image acquisition, pre-processing, feature extraction using the gray level co-occurrence matrix, and normalization of the extracted features using decimal scaling. The results showed that the proposed method was able to identify the timber with an accuracy of 94%.
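    The gray level co-occurrence matrix step can be sketched in a few lines (a toy 4×4 image, 4 gray levels and a single horizontal offset; the study presumably uses multiple offsets and more texture features):

```python
import numpy as np

# Gray level co-occurrence matrix (GLCM) for the horizontal offset (0, 1),
# plus the classic "contrast" texture feature.
def glcm(image, levels):
    m = np.zeros((levels, levels))
    for i, j in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        m[i, j] += 1                       # count left-right neighbour pairs
    return m / m.sum()                     # normalize to probabilities

def contrast(p):
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()        # weights large gray-level jumps

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, 4)
print(contrast(p))
```

    Features such as contrast, energy and homogeneity computed from this matrix would then be normalized and fed to the backpropagation network.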

  11. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    PubMed

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
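    The removal-then-test procedure the authors warn about is easy to reproduce (a rough sketch with my own toy parameters, not the article's simulation design): both groups come from the same skewed "short difficult test" score distribution, so every rejection is a Type I error.

```python
import numpy as np

# Removing per-group outliers at |Z| > 2 from skewed sum scores before a
# two-sample t test inflates the Type I error rate above the nominal 5%.
# Parameters are my own toy choices, not the article's.
rng = np.random.default_rng(1)

def t_stat(a, b):                          # Welch two-sample t statistic
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return (a.mean() - b.mean()) / se

def drop_outliers(x, z=2.0):               # the practice under scrutiny
    zs = (x - x.mean()) / x.std(ddof=1)
    return x[np.abs(zs) <= z]

n, sims, crit = 25, 4000, 2.01             # approx. t critical value, df ~ 48
hits_raw = hits_trim = 0
for _ in range(sims):
    a = rng.binomial(10, 0.15, n).astype(float)  # skewed 10-item test score
    b = rng.binomial(10, 0.15, n).astype(float)
    hits_raw += abs(t_stat(a, b)) > crit
    hits_trim += abs(t_stat(drop_outliers(a), drop_outliers(b))) > crit
print(hits_raw / sims, hits_trim / sims)
```

    In this setup the untrimmed rejection rate stays near the nominal level while the trimmed rate is clearly inflated, because the data-dependent cutoff removes points from only one tail of the skewed distribution.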

  12. Error-Free Text Typing Performance of an Inductive Intra-Oral Tongue Computer Interface for Severely Disabled Individuals.

    PubMed

    Andreasen Struijk, Lotte N S; Bentsen, Bo; Gaihede, Michael; Lontis, Eugen R

    2017-11-01

    For severely paralyzed individuals, alternative computer interfaces are becoming increasingly essential for everyday life as social and vocational activities are facilitated by information technology and as the environment becomes more automatic and remotely controllable. Tongue computer interfaces have proven desirable to users, partly due to their high degree of aesthetic acceptability, but so far the mature systems have shown a relatively low error-free text typing efficiency. This paper evaluated the intra-oral inductive tongue computer interface (ITCI) in its intended use: error-free text typing in a generally available text editing system, Word. Individuals with tetraplegia and able-bodied individuals used the ITCI for typing using a MATLAB interface and for Word typing for 4 to 5 experimental days. The results showed an average error-free text typing rate in Word of 11.6 correct characters/min across all participants and of 15.5 correct characters/min for participants familiar with tongue piercings. Improvements in typing rates between the sessions suggest that typing rates can be improved further through long-term use of the ITCI.

  13. Optimum Array Processing for Detecting Binary Signals Corrupted by Directional Interference.

    DTIC Science & Technology

    1972-12-01

    specific cases. Two different series representations of a vector random process are discussed in Van Trees [3]. These two methods both require the... spacing d, etc.) its detection error represents a lower bound for the performance that might be obtained with other types of array processing (such...Middleton, Introduction to Statistical Communication Theory, New York: McGraw-Hill, 1960. 3. H.L. Van Trees, Detection, Estimation, and Modulation Theory

  14. A comparison of different statistical methods analyzing hypoglycemia data using bootstrap simulations.

    PubMed

    Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory

    2015-01-01

    Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients, so it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on the data from a diabetes clinical trial. A zero-inflated Poisson (ZIP) model and a zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated the type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both the Poisson and negative binomial models yielded slightly inflated type I errors, close to the nominal level, with reasonable power. Reasonable control of the type I error was associated with the ANCOVA model. The rank ANCOVA model was associated with the greatest power and with reasonable control of the type I error. Inflated type I errors were observed with the ZIP and ZINB models.
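    The core reason a plain Poisson model over-rejects is that hypoglycemia counts are overdispersed. A stripped-down illustration (my own toy setup, not the trial data): a two-sample z test that assumes Poisson variance on negative binomial counts.

```python
import numpy as np

# Two groups with identical overdispersed counts; a z test that assumes
# Poisson variance (var = mean) understates the SE and over-rejects.
rng = np.random.default_rng(2)

def z_poisson(a, b):
    se = np.sqrt(a.mean() / a.size + b.mean() / b.size)  # Poisson SE
    return (a.mean() - b.mean()) / se

n, sims, hits = 50, 4000, 0
for _ in range(sims):
    a = rng.negative_binomial(1, 1 / 3, n)   # mean 2, variance 6
    b = rng.negative_binomial(1, 1 / 3, n)
    hits += abs(z_poisson(a, b)) > 1.96
print(hits / sims)                            # far above the nominal 0.05
```

    Since the true variance (6) is three times the Poisson variance (2), the z statistic is inflated by a factor of about √3, which is exactly the mechanism a dispersion adjustment corrects.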

  15. Comparison of the efficacy and technical accuracy of different rectangular collimators for intraoral radiography.

    PubMed

    Zhang, Wenjian; Abramovitch, Kenneth; Thames, Walter; Leon, Inga-Lill K; Colosi, Dan C; Goren, Arthur D

    2009-07-01

    The objective of this study was to compare the operating efficiency and technical accuracy of 3 different rectangular collimators. A full-mouth intraoral radiographic series excluding central incisor views was taken on training manikins by 2 groups of undergraduate dental and dental hygiene students. Three types of rectangular collimator were used: Type I ("free-hand"), Type II (mechanical interlocking), and Type III (magnetic collimator). Eighteen students exposed one side of the manikin with a Type I collimator and the other side with a Type II. Another 15 students exposed the manikin with Type I and Type III, respectively. Type I is currently used for teaching and patient care at our institution and was considered the control to which both Types II and III were compared. The time necessary to perform the procedure, subjective user friendliness, and the number of technique errors (placement, projection, and cone cut errors) were assessed. The Student t test or signed rank test was used to determine statistical difference (P

  16. PREVALENCE OF REFRACTIVE ERRORS IN MADRASSA STUDENTS OF HARIPUR DISTRICT.

    PubMed

    Atta, Zoia; Arif, Abdus Salam; Ahmed, Iftikhar; Farooq, Umer

    2015-01-01

    Visual impairment due to refractive errors is one of the most common problems among school-age children and the second leading cause of treatable blindness. The Right to Sight, a global initiative launched by a coalition of non-government organizations and the World Health Organization (WHO), aims to eliminate avoidable visual impairment and blindness at a global level. In order to achieve this goal, it is important to know the prevalence of different refractive errors in a community. Children and teenagers are the groups most susceptible to refractive errors, so this population needs to be screened for different types of refractive errors. The study was done with the objective of finding the frequency of different types of refractive errors in madrassa students between the ages of 5 and 20 years in Haripur. This cross-sectional study was done with 300 students between the ages of 5 and 20 years in madrassas of Haripur. The students were screened for refractive errors and the types of error were noted. After screening, glasses were prescribed to the students. Myopia (52.6%) was the most frequent refractive error, followed by hyperopia (28.4%) and astigmatism (19%). This study showed that myopia is an important problem in the madrassa population. Females and males are almost equally affected. Spectacle correction of refractive errors is the cheapest and easiest solution to this problem.

  17. Headaches associated with refractive errors: myth or reality?

    PubMed

    Gil-Gouveia, R; Martins, I P

    2002-04-01

    Headache and refractive errors are very common conditions in the general population, and those with headache often attribute their pain to a visual problem. The International Headache Society (IHS) criteria for the classification of headache include an entity of headache associated with refractive errors (HARE), but indicate that its importance is widely overestimated. Our aim was to compare overall headache frequency and HARE frequency in healthy subjects with uncorrected or miscorrected refractive errors and in a control group. We interviewed 105 individuals with uncorrected refractive errors and a control group of 71 subjects (with properly corrected or no refractive errors) regarding their headache history. We compared the occurrence of headache and its diagnosis in both groups and assessed its relation to habits of visual effort and type of refractive error. Headache frequency was similar in subjects and controls. Headache associated with refractive errors was the only headache type significantly more common in subjects with refractive errors than in controls (6.7% versus 0%). It was associated with hyperopia and was unrelated to visual effort or to the severity of visual error. With adequate correction, 72.5% of the subjects with headache and refractive error reported improvement in their headaches, and 38% had complete remission of headache. Regardless of the type of headache present, headache frequency was significantly reduced in these subjects (t = 2.34, P = .02). Headache associated with refractive errors was rarely identified in individuals with refractive errors. In those with chronic headache, proper correction of refractive errors significantly improved headache complaints, primarily by decreasing the frequency of headache episodes.

  18. What errors do peer reviewers detect, and does training improve their ability to detect them?

    PubMed

    Schroter, Sara; Black, Nick; Evans, Stephen; Godlee, Fiona; Osorio, Lyda; Smith, Richard

    2008-10-01

    To analyse data from a trial and report the frequencies with which major and minor errors are detected at a general medical journal, the types of errors missed and the impact of training on error detection. 607 peer reviewers at the BMJ were randomized to two intervention groups receiving different types of training (face-to-face training or a self-taught package) and a control group. Each reviewer was sent the same three test papers over the study period, each of which had nine major and five minor methodological errors inserted. BMJ peer reviewers. The quality of review, assessed using a validated instrument, and the number and type of errors detected before and after training. The number of major errors detected varied over the three papers. The interventions had small effects. At baseline (Paper 1) reviewers found an average of 2.58 of the nine major errors, with no notable difference between the groups. The mean number of errors reported was similar for the second and third papers, 2.71 and 3.0, respectively. Biased randomization was the error detected most frequently in all three papers, identified by over 60% of the reviewers who rejected the papers. Reviewers who did not reject the papers found fewer errors, and the proportion of them finding biased randomization was less than 40% for each paper. Editors should not assume that reviewers will detect most major errors, particularly those concerned with the context of the study. Short training packages have only a slight impact on improving error detection.

  19. Energy-efficient quantum computing

    NASA Astrophysics Data System (ADS)

    Ikonen, Joni; Salmilehto, Juha; Möttönen, Mikko

    2017-04-01

    In the near future, one of the major challenges in the realization of large-scale quantum computers operating at low temperatures is the management of harmful heat loads owing to thermal conduction of cabling and dissipation at cryogenic components. This naturally raises the question of what the fundamental limitations of energy consumption in scalable quantum computing are. In this work, we derive the greatest lower bound for the gate error induced by a single application of a bosonic drive mode of given energy. Previously, such an error type had been considered to be inversely proportional to the total driving power, but we show that this limitation can be circumvented by introducing a qubit driving scheme which reuses and corrects drive pulses. Specifically, our method serves to reduce the average energy consumption per gate operation without increasing the average gate error. Thus our work shows that precise, scalable control of quantum systems can, in principle, be implemented without the introduction of excessive heat or decoherence.

  20. On a more rigorous gravity field processing for future LL-SST type gravity satellite missions

    NASA Astrophysics Data System (ADS)

    Daras, I.; Pail, R.; Murböck, M.

    2013-12-01

    In order to meet the growing demands of the user community concerning the accuracy of temporal gravity field models, future gravity missions of low-low satellite-to-satellite tracking (LL-SST) type are planned to carry more precise sensors than their predecessors. A breakthrough is planned with the improved LL-SST measurement link, where the traditional K-band microwave instrument of 1 μm accuracy will be complemented by an inter-satellite ranging instrument of several nm accuracy. This study focuses on investigations concerning the potential performance of the new sensors and their impact on gravity field solutions. The processing methods for gravity field recovery have to meet the new sensor standards and be able to take full advantage of the new accuracies that they provide. We use full-scale simulations in a realistic environment to investigate whether the standard processing techniques suffice to fully exploit the new sensor standards. We achieve this by performing full numerical closed-loop simulations based on the Integral Equation approach. In our simulation scheme, we simulate dynamic orbits in a conventional tracking analysis to compute pseudo inter-satellite ranges or range-rates that serve as observables. Each part of the processing is validated separately, with special emphasis on numerical errors and their impact on gravity field solutions. We demonstrate that processing with standard precision may be a limiting factor in taking full advantage of the new generation of sensors that future satellite missions will carry. We have therefore created versions of our simulator with enhanced processing precision, with the primary aim of minimizing round-off errors. Results using the enhanced precision show a large reduction of the system errors that were present in the standard-precision processing, even for the error-free scenario, and reveal the improvements the new sensors will bring to the gravity field solutions. 
As a next step, we analyze the contribution of individual error sources to the system's error budget. More specifically, we analyze sensor noise from the laser interferometer and the accelerometers, errors in the kinematic orbits and the background fields, as well as temporal and spatial aliasing errors. We pay special attention to the assessment of error sources with stochastic behavior, such as the laser interferometer and the accelerometers, and to their consistent stochastic modeling in the frame of the adjustment process.
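    The round-off issue motivating the enhanced-precision processing can be illustrated with a toy single- vs double-precision example (generic, unrelated to the authors' simulator):

```python
import numpy as np

# In single precision a small increment to a large accumulated quantity
# is lost entirely; double precision retains it. Near 1e8 the float32
# spacing is 8, so adding 1 rounds straight back to the original value.
big32, big64 = np.float32(1e8), np.float64(1e8)
print(big32 + np.float32(1.0) == big32)   # True: the +1 vanishes
print(big64 + 1.0 == big64)               # False: float64 keeps it
```

    Accumulating millions of nm-level range-rate contributions on top of large orbital quantities is exactly the regime where such lost increments build into a systematic error floor.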

  1. Computerized pharmaceutical intervention to reduce reconciliation errors at hospital discharge in Spain: an interrupted time-series study.

    PubMed

    García-Molina Sáez, C; Urbieta Sanz, E; Madrigal de Torres, M; Vicente Vera, T; Pérez Cárceles, M D

    2016-04-01

    It is well known that medication reconciliation at discharge is a key strategy to ensure proper drug prescription and the effectiveness and safety of any treatment. Different types of interventions to reduce reconciliation errors at discharge have been tested, many of which are based on the use of electronic tools, as they are useful to optimize the medication reconciliation process. However, not all countries are progressing at the same speed in this task and not all tools are equally effective. So it is important to collate updated country-specific data in order to identify possible strategies for improvement in each particular region. Our aim therefore was to analyse the effectiveness of a computerized pharmaceutical intervention to reduce reconciliation errors at discharge in Spain. A quasi-experimental interrupted time-series study was carried out in the cardio-pneumology unit of a general hospital from February to April 2013. The study consisted of three phases: pre-intervention, intervention and post-intervention, each involving 23 days of observations. During the intervention period, a pharmacist was included in the medical team and entered the patient's pre-admission medication in a computerized tool integrated into the electronic clinical history of the patient. The effectiveness was evaluated by the differences between the mean percentages of reconciliation errors in each period using a Mann-Whitney U test accompanied by Bonferroni correction, eliminating autocorrelation of the data by first using an ARIMA analysis. In addition, the types of error identified and their potential seriousness were analysed. A total of 321 patients (119, 105 and 97 in each phase, respectively) were included in the study. For the 3966 medicaments recorded, 1087 reconciliation errors were identified in 77·9% of the patients. 
The mean percentage of reconciliation errors per patient in the first period of the study was 42·18%, falling to 19·82% during the intervention period (P = 0·000). When the intervention was withdrawn, the mean percentage of reconciliation errors increased again to 27·72% (P = 0·008). The difference between the percentages of the pre- and post-intervention periods was statistically significant (P = 0·000). Most reconciliation errors were due to omission (46·7%) or incomplete prescription (43·8%), and 35·3% could have caused harm to the patient. A computerized pharmaceutical intervention is shown to reduce reconciliation errors in the context of a high incidence of such errors. © 2016 John Wiley & Sons Ltd.

  2. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  3. Checklists in Neurosurgery to Decrease Preventable Medical Errors: A Review

    PubMed Central

    Enchev, Yavor

    2015-01-01

    Neurosurgery represents a zero tolerance environment for medical errors, especially preventable ones like all types of wrong site surgery, complications due to the incorrect positioning of patients for neurosurgical interventions and complications due to failure of the devices required for the specific procedure. Following the excellent and encouraging results of the safety checklists in intensive care medicine and in other surgical areas, the checklist was naturally introduced in neurosurgery. To date, the reported world experience with neurosurgical checklists is limited to 15 series with fewer than 20,000 cases in various neurosurgical areas. The purpose of this review was to study the reported neurosurgical checklists according to the following parameters: year of publication; country of origin; area of neurosurgery; type of neurosurgical procedure (elective or emergency); person in charge of the checklist completion; participants involved in completion; whether they prevented wrong site surgery; whether they prevented complications due to incorrect positioning of the patients for neurosurgical interventions; whether they prevented complications due to failure of the devices required for the specific procedure; their specific aims; educational preparation and training; the time needed for checklist completion; study duration and phases; number of cases included; barriers to implementation; efforts to implementation; team appreciation; and safety outcomes. Based on this analysis, it could be concluded that neurosurgical checklists represent an efficient, reliable, cost-effective and time-saving tool for increasing patient safety and elevating the neurosurgeons’ self-confidence. Every neurosurgical department must develop its own neurosurgical checklist or adopt and modify an existing one according to its specific features and needs in an attempt to establish or develop its safety culture. 
The world, continental, regional and national neurosurgical societies could promote safety checklists and their benefits. PMID:26740891

  4. Graduate Students' Administration and Scoring Errors on the Woodcock-Johnson III Tests of Cognitive Abilities

    ERIC Educational Resources Information Center

    Ramos, Erica; Alfonso, Vincent C.; Schermerhorn, Susan M.

    2009-01-01

    The interpretation of cognitive test scores often leads to decisions concerning the diagnosis, educational placement, and types of interventions used for children. Therefore, it is important that practitioners administer and score cognitive tests without error. This study assesses the frequency and types of examiner errors that occur during the…

  5. Analyzing students’ errors on fractions in the number line

    NASA Astrophysics Data System (ADS)

    Widodo, S.; Ikhwanudin, T.

    2018-05-01

    The objective of this study is to identify the types of errors students make when they deal with fractions on the number line. This study used a qualitative, descriptive method and involved 31 sixth-grade students at one of the primary schools in Purwakarta, Indonesia. The results show four types of student errors: unit confusion, tick mark interpretation error, partitioning and un-partitioning error, and estimation error. We recommend that teachers strengthen students' understanding of the unit when studying fractions, make sure students understand tick mark interpretation, remind students of the importance of partitioning and un-partitioning strategies, and teach effective estimation strategies.

  6. Errors analysis of problem solving using the Newman stage after applying cooperative learning of TTW type

    NASA Astrophysics Data System (ADS)

    Rr Chusnul, C.; Mardiyana, S., Dewi Retno

    2017-12-01

    Problem solving is the basis of mathematics learning. Problem solving teaches us to clarify an issue coherently in order to avoid misunderstanding information. Sometimes there may be mistakes in problem solving due to misunderstanding the issue, choosing a wrong concept or misapplying a concept. The problem-solving test was carried out after students received treatment in the form of cooperative learning of the TTW type. The purpose of this study was to elucidate students' problem-solving errors after learning with cooperative learning of the TTW type. Newman stages were used to identify the problem-solving errors. The research used a descriptive method. The subjects in this study were 10th-grade students of a Vocational Senior High School (SMK). Tests and interviews were conducted for data collection. The results of this study describe students' problem-solving errors, classified by Newman stages, after learning with cooperative learning of the TTW type.

  7. Typing Style and the Use of Different Sources of Information during Typing: An Investigation Using Self-Reports

    PubMed Central

    Rieger, Martina; Bart, Victoria K. E.

    2016-01-01

    We investigated to what extent different sources of information are used in typing on a computer keyboard. Using self-reports, 10-finger typists and idiosyncratic typists estimated how much attention they pay to different sources of information during copy typing and free typing and how much they use them for error detection. 10-finger typists reported less attention to the keyboard and the fingers and more attention to the template and the screen than idiosyncratic typists. The groups did not differ in attention to touch/kinaesthesis in copy typing and free typing, but 10-finger typists reported more use of touch/kinaesthesis in error detection. This indicates that processing of tactile/kinaesthetic information may occur largely outside conscious control, as long as no errors occur. 10-finger typists reported more use of internal prediction of movement consequences for error detection than idiosyncratic typists, reflecting more precise internal models. Further, in copy typing compared to free typing, attention to the template is required, thus leaving less attentional capacity for other sources of information. Correlations showed that higher skilled typists, regardless of typing style, rely more on sources of information which are usually associated with 10-finger typing. One limitation of the study is that only self-reports were used. We conclude that typing task, typing proficiency, and typing style influence how attention is distributed during typing. PMID:28018256

  9. Battery Cell By-Pass Circuit

    NASA Technical Reports Server (NTRS)

    Mumaw, Susan J. (Inventor); Evers, Jeffrey (Inventor); Craig, Calvin L., Jr. (Inventor); Walker, Stuart D. (Inventor)

    2001-01-01

    The invention is a circuit and method of limiting the charging current from a power supply network applied to an individual cell of a plurality of cells making up a battery being charged in series. It is particularly designed for use with batteries that can be damaged by overcharging, such as Lithium-ion type batteries. In detail, the method includes the following steps: 1) sensing the actual voltage level of the individual cell; 2) comparing the actual voltage level of the individual cell with a reference value and providing an error signal representative thereof; and 3) by-passing the charging current around the individual cell as necessary to keep the individual cell voltage level generally equal to a specific voltage level while continuing to charge the remaining cells. Preferably this is accomplished by by-passing the charging current around the individual cell if the actual voltage level is above the specific voltage level and allowing the charging current to the individual cell if the actual voltage level is equal to or less than the specific voltage level. In the step of by-passing the charging current, the by-passed current is transferred at a proper voltage level to the power supply. In the by-pass circuit, a voltage comparison circuit is used to compare the actual voltage level of the individual cell with a reference value and to provide an error signal representative thereof. A third circuit, designed to be responsive to the error signal, is provided for maintaining the individual cell voltage level generally equal to the specific voltage level. Circuitry is provided in the third circuit for by-passing charging current around the individual cell if the actual voltage level is above the specific voltage level; this circuitry transfers the excess charging current to the power supply network. The circuitry also allows charging of the individual cell if the actual voltage level is equal to or less than the specific voltage level.
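    The three steps reduce to a simple per-cell comparator rule. A hypothetical sketch (names and threshold values invented; the invention itself is analog circuitry, not software):

```python
# Hypothetical sketch of the per-cell comparator rule (names and values
# invented; the patent describes analog circuitry, not software).
def bypass_step(cell_v, ref_v, charge_i):
    """Return (current_into_cell, bypassed_current) for one cell."""
    error = cell_v - ref_v                 # the "error signal"
    if error > 0:                          # cell above reference: shunt
        return 0.0, charge_i               # by-passed back to the supply
    return charge_i, 0.0                   # at/below reference: charge

print(bypass_step(4.25, 4.20, 1.5))        # overcharged cell -> by-passed
print(bypass_step(4.10, 4.20, 1.5))        # normal cell -> charged
```

    The key design point is the last clause of the abstract: the shunted current is not burned off but returned to the supply network, so the remaining series cells keep charging.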

  10. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors.

    PubMed

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-07-01

    Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Nine head and neck (H&N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (+/- 1 mm in two banks, +/- 0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H&N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. None of the techniques or criteria tested is sufficiently sensitive, for the population of IMRT fields studied, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
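    The gamma index criterion used above combines absolute dose difference and distance to agreement into a single pass/fail metric per point. A minimal one-dimensional sketch follows; the clinical tools cited operate on 2D measurements, and the brute-force search and function names here are our assumptions for illustration.

```python
import math

def gamma_index_1d(x_meas, d_meas, x_ref, d_ref, dose_tol, dist_tol):
    """1D global gamma index (illustrative sketch, not a clinical tool).

    For each measured point, search all reference points for the minimum
    combined dose-difference / distance metric; a point passes when
    gamma <= 1.
    """
    gammas = []
    for xm, dm in zip(x_meas, d_meas):
        g = min(
            math.sqrt(((dm - dr) / dose_tol) ** 2 + ((xm - xr) / dist_tol) ** 2)
            for xr, dr in zip(x_ref, d_ref)
        )
        gammas.append(g)
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1 (the usual summary statistic)."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)
```

    Identical measured and reference distributions give gamma = 0 everywhere; a dose error larger than the tolerance at the same position fails.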

  11. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  12. Automatic-repeat-request error control schemes

    NASA Technical Reports Server (NTRS)

    Lin, S.; Costello, D. J., Jr.; Miller, M. J.

    1983-01-01

    Error detection incorporated with automatic-repeat-request (ARQ) is widely used for error control in data communication systems. This method of error control is simple and provides high system reliability. If a properly chosen code is used for error detection, virtually error-free data transmission can be attained. Various types of ARQ and hybrid ARQ schemes, and error detection using linear block codes are surveyed.
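    A stop-and-wait ARQ scheme of the kind surveyed can be sketched as follows. This is an illustrative model, not any specific protocol implementation: the `channel` callable standing in for a noisy link is a hypothetical interface, and CRC-32 stands in for the linear block codes discussed.

```python
import zlib

def make_frame(seq, payload):
    """Attach a 1-byte sequence number and a CRC-32 for error detection."""
    header = bytes([seq]) + payload
    crc = zlib.crc32(header).to_bytes(4, "big")
    return header + crc

def check_frame(frame):
    """Return (seq, payload) if the CRC verifies, else None (error detected)."""
    header, crc = frame[:-4], frame[-4:]
    if zlib.crc32(header).to_bytes(4, "big") != crc:
        return None
    return header[0], header[1:]

def stop_and_wait(payloads, channel):
    """Stop-and-wait ARQ: retransmit each frame until the receiver's CRC
    check passes. `channel` models a link that may corrupt frames; ACK/NAK
    signalling is collapsed into the CRC check for brevity."""
    received = []
    for seq, payload in enumerate(payloads):
        while True:
            frame = channel(make_frame(seq % 2, payload))
            result = check_frame(frame)
            if result is not None:  # CRC ok -> deliver and move on
                received.append(result[1])
                break               # CRC failed -> loop retransmits
    return received
```

    With a channel that corrupts every other transmission, each frame is sent exactly twice and the data still arrives intact, illustrating the "virtually error-free transmission" property.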

  13. ERP correlates of error processing during performance on the Halstead Category Test.

    PubMed

    Santos, I M; Teixeira, A R; Tomé, A M; Pereira, A T; Rodrigues, P; Vagos, P; Costa, J; Carrito, M L; Oliveira, B; DeFilippis, N A; Silva, C F

    2016-08-01

    The Halstead Category Test (HCT) is a neuropsychological test that measures a person's ability to formulate and apply abstract principles. Performance must be adjusted based on feedback after each trial and errors are common until the underlying rules are discovered. Event-related potential (ERP) studies associated with the HCT are lacking. This paper demonstrates the use of a methodology inspired by Singular Spectrum Analysis (SSA), applied to EEG signals, to remove high-amplitude ocular and movement artifacts during performance on the test. This filtering technique introduces no phase or latency distortions, with minimum loss of relevant EEG information. Importantly, the test was applied in its original clinical format, without introducing adaptations to ERP recordings. After signal treatment, the feedback-related negativity (FRN) wave, which is related to error processing, was identified. This component peaked around 250 ms after feedback, at fronto-central electrodes. As expected, errors elicited more negative amplitudes than correct responses. Results are discussed in terms of the increased clinical potential that coupling ERP information with behavioral performance data can bring to the specificity of the HCT in diagnosing different types of impairment in frontal brain function. Copyright © 2016. Published by Elsevier B.V.
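    The SSA-inspired filtering idea (decompose the signal, discard the high-amplitude artifact components, reconstruct) can be sketched with a basic SSA implementation. This is generic textbook SSA, not the authors' specific methodology; the window length and the number of dropped components are illustrative assumptions.

```python
import numpy as np

def ssa_filter(signal, window, n_drop):
    """Basic Singular Spectrum Analysis filter (illustrative sketch).

    Embed the signal in a Hankel trajectory matrix, take its SVD, zero the
    n_drop leading components (which capture high-amplitude, low-rank
    structure such as ocular artifacts), and reconstruct the series by
    anti-diagonal averaging.
    """
    n = len(signal)
    k = n - window + 1
    X = np.column_stack([signal[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s = s.copy()
    s[:n_drop] = 0.0                 # remove leading (artifact) components
    Xf = (U * s) @ Vt
    # Anti-diagonal (Hankel) averaging back to a 1-D series
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xf[:, j]
        counts[j:j + window] += 1
    return out / counts
```

    Dropping zero components reproduces the input exactly (the SVD is lossless), while dropping the leading component removes a constant offset entirely, since a constant series has a rank-1 trajectory matrix.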

  14. A Framework for Performing V&V within Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1996-01-01

    Verification and validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In order to provide early detection of errors, V&V is conducted in parallel with system development, often beginning with the concept phase. In reuse-based software engineering, however, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In this case, V&V must be performed during domain engineering in order to have an impact on system development. This paper describes a framework for performing V&V within architecture-centric, reuse-based software engineering. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  15. Reproducibility of Illumina platform deep sequencing errors allows accurate determination of DNA barcodes in cells.

    PubMed

    Beltman, Joost B; Urbanus, Jos; Velds, Arno; van Rooij, Nienke; Rohr, Jan C; Naik, Shalin H; Schumacher, Ton N

    2016-04-02

    Next generation sequencing (NGS) of amplified DNA is a powerful tool to describe genetic heterogeneity within cell populations that can both be used to investigate the clonal structure of cell populations and to perform genetic lineage tracing. For applications in which both abundant and rare sequences are biologically relevant, the relatively high error rate of NGS techniques complicates data analysis, as it is difficult to distinguish rare true sequences from spurious sequences that are generated by PCR or sequencing errors. This issue, for instance, applies to cellular barcoding strategies that aim to follow the amount and type of offspring of single cells, by supplying these with unique heritable DNA tags. Here, we use genetic barcoding data from the Illumina HiSeq platform to show that straightforward read threshold-based filtering of data is typically insufficient to filter out spurious barcodes. Importantly, we demonstrate that specific sequencing errors occur at an approximately constant rate across different samples that are sequenced in parallel. We exploit this observation by developing a novel approach to filter out spurious sequences. Application of our new method demonstrates its value in the identification of true sequences amongst spurious sequences in biological data sets.
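    The key observation, that a specific spurious sequence appears at a roughly constant rate across samples sequenced in parallel, suggests a filter along the following lines. This is our hedged paraphrase of the idea, not the authors' published algorithm; the thresholds and helper names are invented for illustration.

```python
def is_likely_spurious(candidate_counts, parent_counts, max_rate=0.05, tol=0.5):
    """Flag a candidate barcode as a likely PCR/sequencing artifact (sketch).

    If the candidate's per-sample read count is a small and roughly
    constant fraction of a one-mismatch "parent" barcode's count in every
    sample, the candidate behaves like a reproducible error of the parent
    rather than a true rare barcode. max_rate (maximum mean error rate)
    and tol (allowed relative spread of the rate) are illustrative.
    """
    rates = [c / p for c, p in zip(candidate_counts, parent_counts) if p > 0]
    if not rates:
        return False
    mean = sum(rates) / len(rates)
    if mean > max_rate:
        return False  # too abundant relative to the parent to be an error
    spread = max(rates) - min(rates)
    return spread <= tol * mean  # roughly constant rate across samples
```

    A barcode at ~1% of its parent in every sample is flagged; one that is abundant, or whose ratio fluctuates wildly between samples, is kept as potentially real.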

  16. Pattern recognition of native plant communities: Manitou Colorado test site

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.

    1972-01-01

    Optimum channel selection among 12 channels of multispectral scanner imagery identified six as providing the best information about 11 vegetation classes and two nonvegetation classes at the Manitou Experimental Forest. Intensive preprocessing of the scanner signals was required to eliminate a serious scan angle effect. Final processing of the normalized data provided acceptable recognition results of generalized plant community types. Serious errors occurred with attempts to classify specific community types within upland grassland areas. The consideration of the convex mixtures concept (effects of amounts of live plant cover, exposed soil, and plant litter cover on apparent scene radiances) significantly improved the classification of some of the grassland classes.

  17. Neyman-Pearson classification algorithms and NP receiver operating characteristics

    PubMed Central

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-01-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies. PMID:29423442
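    The core of the NP umbrella algorithm is choosing the classification threshold as an order statistic of held-out class-0 scores, so that the type I error exceeds α with probability at most a chosen δ. A minimal sketch under that reading (variable names and interface are ours; the nproc R package contains the authors' implementation):

```python
from math import comb

def np_threshold(class0_scores, alpha=0.05, delta=0.05):
    """Order-statistic threshold in the spirit of the NP umbrella algorithm.

    Given held-out scores of class-0 observations (higher score = more
    class-1-like), return the smallest order statistic whose probability
    of yielding a type I error above alpha is at most delta. Classify as
    class 1 when a new score exceeds the returned threshold.
    """
    n = len(class0_scores)
    scores = sorted(class0_scores)
    for k in range(1, n + 1):
        # P(type I error > alpha) when threshold = k-th order statistic
        viol = sum(comb(n, j) * (1 - alpha) ** j * alpha ** (n - j)
                   for j in range(k, n + 1))
        if viol <= delta:
            return scores[k - 1]
    raise ValueError("sample too small to guarantee the type I error bound")
```

    Note the method refuses to return a threshold when the class-0 sample is too small for the high-probability guarantee, which is exactly why naive empirical type I error control fails.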

  18. Why Does a Method That Fails Continue To Be Used: The Answer

    PubMed Central

    Templeton, Alan R.

    2009-01-01

    It has been claimed that hundreds of researchers use nested clade phylogeographic analysis (NCPA) based on what the method promises rather than requiring objective validation of the method. The supposed failure of NCPA is based upon the argument that validating it by using positive controls ignored type I error, and that computer simulations have shown a high type I error. The first argument is factually incorrect: the previously published validation analysis fully accounted for both type I and type II errors. The simulations that indicate a 75% type I error rate have serious flaws and only evaluate outdated versions of NCPA. These outdated type I error rates fall precipitously when the 2003 version of single locus NCPA is used or when the 2002 multi-locus version of NCPA is used. It is shown that the treewise type I errors in single-locus NCPA can be corrected to the desired nominal level by a simple statistical procedure, and that multilocus NCPA reconstructs a simulated scenario used to discredit NCPA with 100% accuracy. Hence, NCPA is not a failed method at all, but rather has been validated both by actual data and by simulated data in a manner that satisfies the published criteria given by its critics. The critics have come to different conclusions because they have focused on the pre-2002 versions of NCPA and have failed to take into account the extensive developments in NCPA since 2002. Hence, researchers can choose to use NCPA based upon objective critical validation that shows that NCPA delivers what it promises. PMID:19335340

  19. Neyman-Pearson classification algorithms and NP receiver operating characteristics.

    PubMed

    Tong, Xin; Feng, Yang; Li, Jingyi Jessica

    2018-02-01

    In many binary classification applications, such as disease diagnosis and spam detection, practitioners commonly face the need to limit type I error (that is, the conditional probability of misclassifying a class 0 observation as class 1) so that it remains below a desired threshold. To address this need, the Neyman-Pearson (NP) classification paradigm is a natural choice; it minimizes type II error (that is, the conditional probability of misclassifying a class 1 observation as class 0) while enforcing an upper bound, α, on the type I error. Despite its century-long history in hypothesis testing, the NP paradigm has not been well recognized and implemented in classification schemes. Common practices that directly limit the empirical type I error to no more than α do not satisfy the type I error control objective because the resulting classifiers are likely to have type I errors much larger than α, and the NP paradigm has not been properly implemented in practice. We develop the first umbrella algorithm that implements the NP paradigm for all scoring-type classification methods, such as logistic regression, support vector machines, and random forests. Powered by this algorithm, we propose a novel graphical tool for NP classification methods: NP receiver operating characteristic (NP-ROC) bands motivated by the popular ROC curves. NP-ROC bands will help choose α in a data-adaptive way and compare different NP classifiers. We demonstrate the use and properties of the NP umbrella algorithm and NP-ROC bands, available in the R package nproc, through simulation and real data studies.

  20. A critical analysis of the accuracy of several numerical techniques for combustion kinetic rate equations

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1993-01-01

    A detailed analysis of the accuracy of several techniques recently developed for integrating stiff ordinary differential equations is presented. The techniques include two general-purpose codes EPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREK1D, and GCKP4 developed specifically to solve chemical kinetic rate equations. The accuracy study is made by application of these codes to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas-phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. To illustrate the error variation in the different combustion regimes the species are divided into three types (reactants, intermediates, and products), and error versus time plots are presented for each species type and the temperature. These plots show that CHEMEQ is the most accurate code during induction and early heat release. During late heat release and equilibration, however, the other codes are more accurate. A single global quantity, a mean integrated root-mean-square error, that measures the average error incurred in solving the complete problem is used to compare the accuracy of the codes. Among the codes examined, LSODE is the most accurate for solving chemical kinetics problems. It is also the most efficient code, in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that use of the algebraic enthalpy conservation equation to compute the temperature can be more accurate and efficient than integrating the temperature differential equation.
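    The accuracy gap between explicit and implicit methods on stiff problems, the reason specialized codes such as LSODE exist, can be seen on the scalar test equation y' = λy with λ ≪ 0. This is a minimal sketch for intuition, not any of the codes studied in the paper:

```python
def explicit_euler(lam, y0, h, steps):
    """Forward Euler on y' = lam*y. For lam < 0 it is only stable when
    h*|lam| < 2; on stiff problems a usable step size is tiny."""
    y = y0
    for _ in range(steps):
        y = y + h * lam * y       # amplification factor (1 + h*lam)
    return y

def implicit_euler(lam, y0, h, steps):
    """Backward Euler on the same problem; unconditionally stable for
    lam < 0, so large steps still track the decaying exact solution
    y(t) = y0 * exp(lam*t)."""
    y = y0
    for _ in range(steps):
        y = y / (1 - h * lam)     # amplification factor 1/(1 - h*lam)
    return y
```

    With λ = -1000 and h = 0.01 (so h·|λ| = 10), the explicit solution explodes while the implicit one correctly decays toward the exact value e^(-100) ≈ 0.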

  1. Review of medication errors that are new or likely to occur more frequently with electronic medication management systems.

    PubMed

    Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan

    2018-05-14

    Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea to all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but some new medication errors have been identified and some are likely to occur more frequently with eMMS. There are few published Australian studies that have reported on medication error types likely to occur more frequently with eMMS across more than one organisation and that include both administration and prescribing errors. What does this paper add? This paper suggests a new, simple classification system for eMMS medication errors, outlines the most commonly reported incident types, and can inform organisations and vendors on possible eMMS improvements. What are the implications for practitioners? The results of the present study highlight to organisations the need for ongoing review of system design, refinement of workflow issues, staff education and training, and reporting and monitoring of errors.

  2. How common are cognitive errors in cases presented at emergency medicine resident morbidity and mortality conferences?

    PubMed

    Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett

    2018-06-20

    Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. 
Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.

  3. Analysis of Errors Committed by Physics Students in Secondary Schools in Ilorin Metropolis, Nigeria

    ERIC Educational Resources Information Center

    Omosewo, Esther Ore; Akanbi, Abdulrasaq Oladimeji

    2013-01-01

    The study attempted to find out the types of errors committed, and the influence of gender on the types of errors committed, by senior secondary school physics students in Ilorin metropolis. Six (6) schools were purposively chosen for the study. One hundred and fifty-five students' scripts were randomly sampled for the study. Joint Mock physics essay questions…

  4. Error Types and Error Positions in Neglect Dyslexia: Comparative Analyses in Neglect Patients and Healthy Controls

    ERIC Educational Resources Information Center

    Weinzierl, Christiane; Kerkhoff, Georg; van Eimeren, Lucia; Keller, Ingo; Stenneken, Prisca

    2012-01-01

    Unilateral spatial neglect frequently involves a lateralised reading disorder, neglect dyslexia (ND). Reading of single words in ND is characterised by left-sided omissions and substitutions of letters. However, it is unclear whether the distribution of error types and positions within a word shows a unique pattern of ND when directly compared to…

  5. A large-area, spatially continuous assessment of land cover map error and its impact on downstream analyses.

    PubMed

    Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly

    2018-01-01

    Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess (1) the magnitude of error in several current generation land cover maps, and (2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, and then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g., at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher-resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g., with forest as adjacent cover, carbon map error was 200%-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). 
These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land cover map users. © 2017 John Wiley & Sons Ltd.
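    The bias and mean absolute error (MAE) measures quoted above can be sketched directly. This is a hypothetical helper for illustration, operating on paired mapped and reference cropland fractions:

```python
def bias_and_mae(mapped, reference):
    """Pixel-wise bias (signed mean error) and mean absolute error (MAE)
    between mapped and reference values (sketch of the accuracy measures).
    Bias captures systematic over/underestimation; MAE captures accuracy
    regardless of sign."""
    n = len(mapped)
    bias = sum(m - r for m, r in zip(mapped, reference)) / n
    mae = sum(abs(m - r) for m, r in zip(mapped, reference)) / n
    return bias, mae
```

    A map that underestimates cropland at every pixel shows equal-magnitude negative bias and MAE; offsetting errors can cancel in the bias while still inflating the MAE.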

  6. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  7. Errors in fluid therapy in medical wards.

    PubMed

    Mousavi, Maryam; Khalili, Hossein; Dashti-Khavidaki, Simin

    2012-04-01

    Intravenous fluid therapy remains an essential part of patients' care during hospitalization. There are only a few studies that have focused on fluid therapy in hospitalized patients, and there is no consensus statement about fluid therapy in patients hospitalized in medical wards. The aim of the present study was to assess intravenous fluid therapy status and related errors in patients during the course of hospitalization in the infectious diseases wards of a referral teaching hospital. This study was conducted in the infectious diseases wards of Imam Khomeini Complex Hospital, Tehran, Iran. During a retrospective study, data related to intravenous fluid therapy were collected by two clinical pharmacists of infectious diseases from 2008 to 2010. Intravenous fluid therapy information including indication, type, volume and rate of fluid administration was recorded for each patient. An internal protocol for intravenous fluid therapy was designed based on literature review and available recommendations. The data related to patients' fluid therapy were compared with this protocol. The fluid therapy was considered appropriate if it was compatible with the protocol regarding indication of intravenous fluid therapy, type, electrolyte content and rate of fluid administration. Any mistake in the selection of fluid type, content, volume and rate of administration was considered an intravenous fluid therapy error. Five hundred and ninety-six medication errors were detected in the patients during the study period. The overall rate of fluid therapy errors was 1.3 per patient during hospitalization. Errors in the rate of fluid administration (29.8%), incorrect fluid volume calculation (26.5%) and incorrect type of fluid selection (24.6%) were the most common types of errors.
The patients' male sex, old age, baseline renal disease, diabetes co-morbidity, and hospitalization due to endocarditis, HIV infection or sepsis were predisposing factors for the occurrence of fluid therapy errors. Our results showed that intravenous fluid therapy errors occurred commonly in hospitalized patients, especially in medical wards. Improvements in the knowledge and attention of health-care workers regarding these errors are essential for preventing medication errors in fluid therapy.

  8. Development of a Spindle Thermal Error Characterization and Compensation Sensor System for Machining Center Accuracy Enhancement

    DTIC Science & Technology

    1993-12-01


  9. Many tests of significance: new methods for controlling type I errors.

    PubMed

    Keselman, H J; Miller, Charles W; Holland, Burt

    2011-12-01

    There have been many discussions of how Type I errors should be controlled when many hypotheses are tested (e.g., all possible comparisons of means, correlations, proportions, the coefficients in hierarchical models, etc.). By and large, researchers have adopted familywise (FWER) control, though this practice certainly is not universal. Familywise control is intended to deal with the multiplicity issue of computing many tests of significance, yet such control is conservative--that is, less powerful--compared to per test/hypothesis control. The purpose of our article is to introduce the readership, particularly those readers familiar with issues related to controlling Type I errors when many tests of significance are computed, to newer methods that provide protection from the effects of multiple testing, yet are more powerful than familywise controlling methods. Specifically, we introduce a number of procedures that control the k-FWER. These methods--say, 2-FWER instead of 1-FWER (i.e., FWER)--are equivalent to specifying that the probability of 2 or more false rejections is controlled at .05, whereas FWER controls the probability of any (i.e., 1 or more) false rejections at .05. 2-FWER implicitly tolerates 1 false rejection and makes no explicit attempt to control the probability of its occurrence, unlike FWER, which tolerates no false rejections at all. More generally, k-FWER tolerates k - 1 false rejections, but controls the probability of k or more false rejections at α =.05. We demonstrate with two published data sets how more hypotheses can be rejected with k-FWER methods compared to FWER control.
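    The simplest k-FWER controlling procedure, the single-step generalization of Bonferroni due to Lehmann and Romano, rejects any hypothesis with p ≤ kα/m. A short sketch (the function name and interface are ours):

```python
def k_fwer_rejections(p_values, k=2, alpha=0.05):
    """Single-step k-FWER control via the generalized Bonferroni procedure
    of Lehmann and Romano: reject H_i when p_i <= k*alpha/m. With k = 1
    this is ordinary Bonferroni (FWER control); larger k tolerates up to
    k - 1 false rejections and therefore rejects more hypotheses."""
    m = len(p_values)
    cutoff = k * alpha / m
    return [i for i, p in enumerate(p_values) if p <= cutoff]
```

    With four p-values and α = .05, the FWER cutoff is .0125 while the 2-FWER cutoff relaxes to .025, so 2-FWER can reject strictly more hypotheses, which is the power gain the abstract describes.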

  10. Comparison of gel column, card, and cartridge techniques for dog erythrocyte antigen 1.1 blood typing

    PubMed Central

    Seth, Mayank; Jackson, Karen V.; Winzelberg, Sarah; Giger, Urs

    2012-01-01

    Objective. To compare accuracy and ease of use of a card agglutination assay, an immunochromatographic cartridge method, and a gel-based method for canine blood typing. Sample. Blood samples from 52 healthy blood donor dogs, 10 dogs with immune-mediated hemolytic anemia (IMHA), and 29 dogs with other diseases. Procedures. Blood samples were tested in accordance with manufacturer guidelines. Samples with low PCVs were created by the addition of autologous plasma to separately assess the effects of anemia on test results. Results. Compared with a composite reference standard of agreement between 2 methods, the gel-based method was found to be 100% accurate. The card agglutination assay was 89% to 91% accurate, depending on test interpretation, and the immunochromatographic cartridge method was 93% accurate but 100% specific. Errors were observed more frequently in samples from diseased dogs, particularly those with IMHA. In the presence of persistent autoagglutination, dog erythrocyte antigen (DEA) 1.1 typing was not possible, except with the immunochromatographic cartridge method. Conclusions and Clinical Relevance. The card agglutination assay and immunochromatographic cartridge method, performed by trained personnel, were suitable for in-clinic emergency DEA 1.1 blood typing. There may be errors, particularly for samples from dogs with IMHA, and the immunochromatographic cartridge method may have an advantage of allowing typing of samples with persistent autoagglutination. The laboratory gel-based method would be preferred for routine DEA 1.1 typing of donors and patients if it is available and time permits. Current DEA 1.1 typing techniques appear to be appropriately standardized and easy to use. PMID:22280380

  11. Powerful Inference with the D-Statistic on Low-Coverage Whole-Genome Data

    PubMed Central

    Soraggi, Samuele; Wiuf, Carsten; Albrechtsen, Anders

    2017-01-01

    The detection of ancient gene flow between human populations is an important issue in population genetics. A common tool for detecting ancient admixture events is the D-statistic. The D-statistic is based on the hypothesis of a genetic relationship that involves four populations, whose correctness is assessed by evaluating specific coincidences of alleles between the groups. When working with high-throughput sequencing data, calling genotypes accurately is not always possible; therefore, the D-statistic currently samples a single base from the reads of one individual per population. This implies ignoring much of the information in the data, an issue especially striking in the case of ancient genomes. We provide a significant improvement to overcome the problems of the D-statistic by considering all reads from multiple individuals in each population. We also apply type-specific error correction to combat the problems of sequencing errors, and show a way to correct for introgression from an external population that is not part of the supposed genetic relationship, and how this leads to an estimate of the admixture rate. We prove that the D-statistic is approximated by a standard normal distribution. Furthermore, we show that our method outperforms the traditional D-statistic in detecting admixtures. The power gain is most pronounced for low and medium sequencing depth (1–10×), and performances are as good as with perfectly called genotypes at a sequencing depth of 2×. We show the reliability of error correction in scenarios with simulated errors and ancient data, and correct for introgression in known scenarios to estimate the admixture rates. PMID:29196497
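
The core of the D-statistic is a count of the two discordant allele patterns (ABBA and BABA) across the four groups (H1, H2, H3, outgroup); a minimal sketch on hypothetical biallelic site patterns:

```python
# D-statistic (ABBA-BABA) from biallelic site patterns for four groups
# (H1, H2, H3, outgroup). The site patterns below are hypothetical.

def d_statistic(patterns):
    """patterns: iterable of 4-tuples of alleles, one site each."""
    abba = baba = 0
    for h1, h2, h3, out in patterns:
        if h1 == out and h2 == h3 and h1 != h2:
            abba += 1   # ABBA: H2 shares the derived allele with H3
        elif h2 == out and h1 == h3 and h1 != h2:
            baba += 1   # BABA: H1 shares the derived allele with H3
    return (abba - baba) / (abba + baba)

sites = [("A", "B", "B", "A")] * 30 + [("B", "A", "B", "A")] * 10 \
        + [("A", "A", "B", "A")] * 60   # concordant sites are ignored
print(d_statistic(sites))  # → 0.5, i.e. (30 - 10) / (30 + 10)
```

Under the null hypothesis of no admixture the two patterns are equally likely and D is near 0; the paper's contribution is to form these counts from all reads of multiple individuals, with type-specific error correction, rather than from a single sampled base.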

  12. Validation of temporal and spatial consistency of facility- and speed-specific vehicle-specific power distributions for emission estimation: A case study in Beijing, China.

    PubMed

    Zhai, Zhiqiang; Song, Guohua; Lu, Hongyu; He, Weinan; Yu, Lei

    2017-09-01

    Vehicle-specific power (VSP) has been found to be highly correlated with vehicle emissions. It is used in many studies on emission modeling such as the MOVES (Motor Vehicle Emissions Simulator) model. The existing studies develop specific VSP distributions (or OpMode distributions in MOVES) for different road types and various average speeds to represent the vehicle operating modes on road. However, it is still not clear whether the facility- and speed-specific VSP distributions are consistent temporally and spatially. For instance, is it necessary to update the database of VSP distributions in the emission model periodically? Are the VSP distributions developed in the city central business district (CBD) area applicable to its suburb area? In this context, this study examined the temporal and spatial consistency of the facility- and speed-specific VSP distributions in Beijing. The VSP distributions in different years and in different areas are developed, based on real-world vehicle activity data. The root mean square error (RMSE) is employed to quantify the difference between the VSP distributions. The maximum differences of the VSP distributions between different years and between different areas are approximately 20% of that between different road types. The analysis of the carbon dioxide (CO 2 ) emission factor indicates that the temporal and spatial differences of the VSP distributions have no significant impact on vehicle emission estimation, with relative error of less than 3%. Thus, the temporal and spatial differences have no significant impact on the development of the facility- and speed-specific VSP distributions for vehicle emission estimation, and the database of specific VSP distributions in VSP-based emission models can be maintained over time: it is unnecessary to update the database regularly, and it is reliable to use historical vehicle activity data to forecast future emissions. Within one city, areas with less data can still develop accurate VSP distributions based on richer data from other areas.
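
The RMSE measure used above to compare two VSP (OpMode) distributions can be sketched as follows, with hypothetical bin fractions:

```python
# Root mean square error between two discrete VSP-bin distributions.
# The bin fractions below are hypothetical; each list should sum to 1.
import math

def rmse(dist_a, dist_b):
    assert len(dist_a) == len(dist_b)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(dist_a, dist_b)) / len(dist_a))

vsp_year1 = [0.10, 0.25, 0.30, 0.20, 0.15]   # e.g., one facility/speed bin, year 1
vsp_year2 = [0.12, 0.23, 0.31, 0.19, 0.15]   # same facility/speed bin, year 2

print(round(rmse(vsp_year1, vsp_year2), 4))  # → 0.0141
```

A small RMSE between years (relative to the RMSE between road types) is exactly the kind of evidence the study uses to argue the distributions are temporally consistent.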

  13. Working memory load impairs the evaluation of behavioral errors in the medial frontal cortex.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2017-10-01

    Early error monitoring in the medial frontal cortex enables error detection and the evaluation of error significance, which helps prioritize adaptive control. This ability has been assumed to be independent from central capacity, a limited pool of resources assumed to be involved in cognitive control. The present study investigated whether error evaluation depends on central capacity by measuring the error-related negativity (Ne/ERN) in a flanker paradigm while working memory load was varied on two levels. We used a four-choice flanker paradigm in which participants had to classify targets while ignoring flankers. Errors could be due to responding either to the flankers (flanker errors) or to none of the stimulus elements (nonflanker errors). With low load, the Ne/ERN was larger for flanker errors than for nonflanker errors-an effect that has previously been interpreted as reflecting differential significance of these error types. With high load, no such effect of error type on the Ne/ERN was observable. Our findings suggest that working memory load does not impair the generation of an Ne/ERN per se but rather impairs the evaluation of error significance. They demonstrate that error monitoring is composed of capacity-dependent and capacity-independent mechanisms. © 2017 Society for Psychophysiological Research.

  14. Monte Carlo design and simulation of a grid-type multi-layer pixel collimator for radiotherapy: Feasibility study

    NASA Astrophysics Data System (ADS)

    Yoon, Do-Kun; Jung, Joo-Young; Suh, Tae Suk

    2014-05-01

    In order to confirm the possibility of field application of a collimator of a different type than the multileaf collimator (MLC), we constructed a grid-type multi-layer pixel collimator (GTPC) by using a Monte Carlo n-particle simulation (MCNPX). In this research, a number of factors related to the performance of the GTPC were evaluated using simulated output data of a basic MLC model. A layer comprised a 1024-pixel collimator (5.0 × 5.0 mm2), which could operate individually as a grid-type collimator (32 × 32). A 30-layer collimator was constructed for a specific portal form to pass radiation through the opening and closing of each pixel cover. The radiation attenuation level and the leakage were compared between the GTPC modality simulation and MLC modeling (tungsten, 17.50 g/cm3, 5.0 × 70.0 × 160.0 mm3) currently used for a radiation field. Comparisons of the portal imaging, the lateral dose profile from a virtual water phantom, the dependence of the performance on the increase in the number of layers, the radiation intensity modulation verification, and the geometric error between the GTPC and the MLC were done using the MCNPX simulation data. From the simulation data, the intensity modulation of the GTPC showed a faster response than the MLC's (29.6%). In addition, the agreement between the doses that should be delivered to the target region was measured as 97.0%, and the GTPC system had an error below 0.01%, which is identical to that of the MLC. A Monte Carlo simulation of the GTPC could be useful for verification of application possibilities. Because the line artifact is caused by the grid frame and the folded cover, a lineal dose transfer type is chosen for the operation of this system. However, the result of the GTPC's performance showed that the methods of effective intensity modulation and the specific geometric beam shaping differed from the MLC modality.

  15. Writing errors as a result of frontal dysfunction in Japanese patients with amyotrophic lateral sclerosis.

    PubMed

    Tsuji-Akimoto, Sachiko; Hamada, Shinsuke; Yabe, Ichiro; Tamura, Itaru; Otsuki, Mika; Kobashi, Syoji; Sasaki, Hidenao

    2010-12-01

    Loss of communication is a critical problem for advanced amyotrophic lateral sclerosis (ALS) patients. This loss of communication is mainly caused by severe dysarthria and disability of the dominant hand. However, reports show that about 50% of ALS patients have mild cognitive dysfunction, and there are a considerable number of case reports on Japanese ALS patients with agraphia. To clarify writing disabilities in non-demented ALS patients, 18 non-demented ALS patients and 16 controls without neurological disorders were examined for frontal cognitive function and writing ability. To assess writing errors statistically, we scored them on their composition ability with the original writing error index (WEI). The ALS and control groups did not differ significantly with regard to age, years of education, or general cognitive level. Two patients could not write a letter because of disability of the dominant hand. The WEI and results of picture arrangement tests indicated significant impairment in the ALS patients. Auditory comprehension (Western Aphasia Battery; WAB IIC) and kanji dictation also showed mild impairment. Patients' writing errors consisted of both syntactic and letter-writing mistakes. Omission, substitution, displacement, and inappropriate placement of the phonic marks of kana were observed; these features have often been reported in Japanese patients with agraphia resulting from a frontal lobe lesion. The most frequent type of error was an omission of kana; the next most common was a missing subject. Writing errors might be a specific deficit for some non-demented ALS patients.

  16. Optimal Sampling to Provide User-Specific Climate Information.

    NASA Astrophysics Data System (ADS)

    Panturat, Suwanna

    The types of weather-related world problems which are of socio-economic importance selected in this study as representative of three different levels of user groups include: (i) a regional problem concerned with air pollution plumes which lead to acid rain in the north eastern United States, (ii) a state-level problem in the form of winter wheat production in Oklahoma, and (iii) an individual-level problem involving reservoir management given errors in rainfall estimation at Lake Ellsworth, upstream from Lawton, Oklahoma. The study is aimed at designing optimal sampling networks which are based on customer value systems and also abstracting from data sets that information which is most cost-effective in reducing the climate-sensitive aspects of a given user problem. Three process models being used in this study to interpret climate variability in terms of the variables of importance to the user comprise: (i) the HEFFTER-SAMSON diffusion model as the climate transfer function for acid rain, (ii) the CERES-MAIZE plant process model for winter wheat production and (iii) the AGEHYD streamflow model selected as "a black box" for reservoir management. A state-of-the-art Non Linear Program (NLP) algorithm for minimizing an objective function is employed to determine the optimal number and location of various sensors. Statistical quantities considered in determining sensor locations include the Bayes risk, the chi-squared value, the probability of a Type I error (alpha), the probability of a Type II error (beta), and the noncentrality parameter delta^2.
Moreover, the number of years required to detect a climate change resulting in a given bushel per acre change in mean wheat production is determined; the number of seasons of observations required to reduce the standard deviation of the error variance of the ambient sulfur dioxide to less than a certain percent of the mean is found; and finally the policy of maintaining pre-storm flood pools at selected levels is examined given information from the optimal sampling network as defined by the study.

  17. Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.

    PubMed

    Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M

    2003-05-13

    Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. The objective was to explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.

  18. Errors in accident data, its types, causes and methods of rectification-analysis of the literature.

    PubMed

    Ahmed, Ashar; Sadullah, Ahmad Farhan Mohd; Yahya, Ahmad Shukri

    2017-07-29

    Most of the decisions taken to improve road safety are based on accident data, which makes it the backbone of any country's road safety system. Errors in this data will lead to misidentification of black spots and hazardous road segments, projection of false estimates pertinent to accidents and fatality rates, and detection of wrong parameters responsible for accident occurrence, thereby making the entire road safety exercise ineffective. The extent of error varies from country to country depending upon various factors. Knowing the type of error in the accident data and the factors causing it enables the application of the correct method for its rectification. Therefore there is a need for a systematic literature review that addresses the topic at a global level. This paper fulfils the above research gap by providing a synthesis of literature for the different types of errors found in the accident data of 46 countries across the six regions of the world. The errors are classified and discussed with respect to each type and analysed with respect to income level; assessment with regard to the magnitude for each type is provided; followed by the different causes that result in their occurrence, and the various methods used to address each type of error. Among high-income countries the extent of error in reporting slight, severe, non-fatal and fatal injury accidents varied between 39-82%, 16-52%, 12-84%, and 0-31% respectively. For middle-income countries the error for the same categories varied between 93-98%, 32.5-96%, 34-99% and 0.5-89.5% respectively. The only four studies available for low-income countries showed that the error in reporting non-fatal and fatal accidents varied between 69-80% and 0-61% respectively. The logistic relation of error in accident data reporting, dichotomised at 50%, indicated that as the income level of a country increases the probability of having less error in accident data also increases. 
    Average error in recording information related to the variables in the categories of location, victim's information, vehicle's information, and environment was 27%, 37%, 16% and 19% respectively. Among the causes identified for errors in accident data reporting, the policing system was found to be the most important. Overall, 26 causes of errors in accident data were discussed, of which 12 were related to reporting and 14 were related to recording. "Capture-recapture" was the most widely used of the 11 different methods that can be applied to rectify under-reporting. There were 12 studies pertinent to the rectification of accident location, and almost all of them utilised a Geographical Information System (GIS) platform coupled with a matching algorithm to estimate the correct location. It is recommended that the policing system should be reformed and public awareness should be created to help reduce errors in accident data. Copyright © 2017 Elsevier Ltd. All rights reserved.
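
Two-source capture-recapture, the most widely used rectification method mentioned above, can be sketched with the Lincoln-Petersen estimator; the counts below are hypothetical (e.g., a police register matched against hospital records):

```python
# Lincoln-Petersen two-source capture-recapture estimate of the true number
# of accidents, given two incomplete registers. All counts are hypothetical.

def lincoln_petersen(n_source1, n_source2, n_both):
    """Estimate the total from two overlapping, incomplete data sources."""
    return n_source1 * n_source2 / n_both

n_police, n_hospital, n_both = 200, 150, 60   # matched cases appear in both
n_hat = lincoln_petersen(n_police, n_hospital, n_both)
print(n_hat)                                  # → 500.0 estimated true total

underreporting = 1 - n_police / n_hat         # police register misses 60%
print(round(underreporting, 2))               # → 0.6
```

The estimator assumes the two sources are independent and the population is closed; violations of those assumptions are among the reasons more refined methods appear in this literature.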

  19. Conversion of radius of curvature to power (and vice versa)

    NASA Astrophysics Data System (ADS)

    Wickenhagen, Sven; Endo, Kazumasa; Fuchs, Ulrike; Youngworth, Richard N.; Kiontke, Sven R.

    2015-09-01

    Manufacturing optical components relies on good measurements and specifications. One of the most precise measurements routinely required is the form accuracy. In practice, form deviation from the ideal surface effectively consists of low-frequency errors, where the form error most often amounts to no more than a few undulations across a surface. These types of errors are measured in a variety of ways including interferometry and tactile methods like profilometry, with the latter often being employed for aspheres and general surface shapes such as freeforms. This paper provides a basis for a correct description of power and radius of curvature tolerances, including best practices and calculating the power value with respect to the radius deviation (and vice versa) of the surface form. A consistent definition of the sagitta is presented, along with different cases in manufacturing that are of interest to fabricators and designers. The results make clear how the definitions and results should be documented, for all measurement setups. Relationships between power and radius of curvature are shown that allow specifying the preferred metric based on final accuracy and measurement method. Results shown include all necessary equations for conversion to give optical designers and manufacturers a consistent and robust basis for decision-making. The paper also gives guidance on preferred methods for different scenarios for surface types, accuracy required, and metrology methods employed.
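
One common way to relate a radius deviation to a power (sag) error uses the exact sagitta formula over the measured aperture; a minimal sketch under that convention (the numbers are illustrative, and the fringe conversion assumes a 633 nm test wavelength):

```python
# Sag-based relation between radius of curvature and power error.
# sag(R, h) = R - sqrt(R^2 - h^2) over semi-aperture h; the power error is
# taken here as the sag change produced by a radius deviation dR.
# Values are illustrative; the fringe conversion assumes lambda = 633 nm.
import math

def sag(R, h):
    return R - math.sqrt(R * R - h * h)

R = 100.0      # nominal radius of curvature, mm
h = 10.0       # semi-aperture of the measurement, mm
dR = 0.1       # radius deviation, mm

power_sag = sag(R, h) - sag(R + dR, h)     # sag change in mm
fringes = 2 * power_sag / 633e-6           # one fringe = lambda/2 of sag
print(round(power_sag * 1000, 2), "um,", round(fringes, 2), "fringes")
```

Note how the conversion depends on the aperture h over which the surface is actually measured: the same 0.1 mm radius error produces a much larger sag (power) error on a faster, wider part.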

  20. Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources

    NASA Astrophysics Data System (ADS)

    Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.

    2011-05-01

    The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.

  1. Judgment of Line Orientation Depends on Gender, Education, and Type of Error

    ERIC Educational Resources Information Center

    Caparelli-Daquer, Egas M.; Oliveira-Souza, Ricardo; Filho, Pedro F. Moreira

    2009-01-01

    Visuospatial tasks are particularly proficient at eliciting gender differences during neuropsychological performance. Here we tested the hypothesis that gender and education are related to different types of visuospatial errors on a task of line orientation that allowed the independent scoring of correct responses ("hits", or H) and one type of…

  2. Influence of Lexical Factors on Word-Finding Accuracy, Error Patterns, and Substitution Types

    ERIC Educational Resources Information Center

    Newman, Rochelle S.; German, Diane J.; Jagielko, Jennifer R.

    2018-01-01

    This retrospective, exploratory investigation examined the types of target words that 66 children with/without word-finding difficulties (WFD) had difficulty naming, and the types of errors they made. Words were studied with reference to lexical factors (LFs) that might influence naming performance: word frequency, familiarity, length, phonotactic…

  3. Performing a reaching task with one arm while adapting to a visuomotor rotation with the other can lead to complete transfer of motor learning across the arms

    PubMed Central

    Lei, Yuming; Binder, Jeffrey R.

    2015-01-01

    The extent to which motor learning is generalized across the limbs is typically very limited. Here, we investigated how two motor learning hypotheses could be used to enhance the extent of interlimb transfer. According to one hypothesis, we predicted that reinforcement of successful actions by providing binary error feedback regarding task success or failure, in addition to terminal error feedback, during initial training would increase the extent of interlimb transfer following visuomotor adaptation (experiment 1). According to the other hypothesis, we predicted that performing a reaching task repeatedly with one arm without providing performance feedback (which prevented learning the task with this arm), while concurrently adapting to a visuomotor rotation with the other arm, would increase the extent of transfer (experiment 2). Results indicate that providing binary error feedback, compared with continuous visual feedback that provided movement direction and amplitude information, had no influence on the extent of transfer. In contrast, repeatedly performing (but not learning) a specific task with one arm while visuomotor adaptation occurred with the other arm led to nearly complete transfer. This suggests that the absence of motor instances associated with specific effectors and task conditions is the major reason for limited interlimb transfer and that reinforcement of successful actions during initial training is not beneficial for interlimb transfer. These findings indicate crucial contributions of effector- and task-specific motor instances, which are thought to underlie (a type of) model-free learning, to optimal motor learning and interlimb transfer. PMID:25632082

  4. Regression models to predict hip joint centers in pathological hip population.

    PubMed

    Mantovani, Giulia; Ng, K C Geoffrey; Lamontagne, Mario

    2016-02-01

    The purpose was to investigate the validity of Harrington's and Davis's hip joint center (HJC) regression equations on a population affected by a hip deformity (i.e., femoroacetabular impingement). Sixty-seven participants (21 healthy controls, 46 with a cam-type deformity) underwent pelvic CT imaging. Relevant bony landmarks and geometric HJCs were digitized from the images, and skin thickness was measured for the anterior and posterior superior iliac spines. Non-parametric statistical and Bland-Altman tests analyzed differences between the predicted HJC (from regression equations) and the actual HJC (from CT images). The error from Davis's model (25.0 ± 6.7 mm) was larger than Harrington's (12.3 ± 5.9 mm, p<0.001). There were no differences between groups; thus, studies on femoroacetabular impingement can implement conventional regression models. Measured skin thickness was 9.7 ± 7.0 mm and 19.6 ± 10.9 mm for the anterior and posterior bony landmarks, respectively, and correlated with body mass index. Skin thickness estimates can be considered to reduce the systematic error introduced by surface markers. New adult-specific regression equations were developed from the CT dataset, with the hypothesis that they could provide better estimates when tuned to a larger adult-specific dataset. The linear models were validated on external datasets and using leave-one-out cross-validation techniques; prediction errors were comparable to those of Harrington's model, despite the adult-specific population and the larger sample size; thus, prediction accuracy obtained from these parameters could not be improved. Copyright © 2015 Elsevier B.V. All rights reserved.
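
The Bland-Altman comparison used above reduces to the mean difference (bias) and its 95% limits of agreement; a minimal sketch with hypothetical predicted-minus-actual HJC differences (in mm):

```python
# Bland-Altman limits of agreement: mean difference +/- 1.96 * SD.
# The paired differences below (predicted minus CT-derived HJC, in mm)
# are hypothetical, not the study's data.
import math

def limits_of_agreement(diffs):
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean - 1.96 * sd, mean, mean + 1.96 * sd

diffs = [10.0, 14.0, 9.0, 16.0, 11.0, 12.0, 13.0, 15.0]
lower, bias, upper = limits_of_agreement(diffs)
print(round(bias, 1), round(lower, 1), round(upper, 1))  # → 12.5 7.7 17.3
```

Here one would conclude that the regression model overestimates the HJC location by about 12.5 mm on average, with 95% of individual errors expected between the two limits.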

  5. Statistical text classifier to detect specific type of medical incidents.

    PubMed

    Wong, Zoie Shui-Yee; Akiyama, Masanori

    2013-01-01

    WHO Patient Safety has focused on increasing the coherence and expressiveness of patient safety classification with the foundation of the International Classification for Patient Safety (ICPS). Text classification and statistical approaches have been shown to be successful in identifying safety problems in the aviation industry using incident text information. It has been challenging to comprehend the taxonomy of medical incidents in a structured manner. Independent reporting mechanisms for patient safety incidents have been established in the UK, Canada, Australia, Japan, Hong Kong, etc. This research demonstrates the potential to construct statistical text classifiers to detect specific types of medical incidents using incident text data. An illustrative example for classifying look-alike sound-alike (LASA) medication incidents using structured text from 227 advisories related to medication errors from Global Patient Safety Alerts (GPSA) is shown in this poster presentation. The classifier was built using a logistic regression model. The ROC curve and the AUC value indicated that this is a satisfactory model.
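
A minimal pure-Python sketch of such a classifier (the incident texts and labels below are invented; the GPSA data are not reproduced): bag-of-words features fed to a logistic regression trained by gradient descent, then evaluated with the AUC.

```python
# Tiny statistical text classifier: bag-of-words + logistic regression
# trained by gradient descent, evaluated by AUC. All incident texts below
# are invented examples, not records from GPSA.
import math

train = [
    ("patient given hydroxyzine instead of hydralazine sound alike mix up", 1),
    ("look alike packaging caused wrong vial selected from shelf", 1),
    ("similar drug names confused during verbal order", 1),
    ("dose calculation error pump programmed incorrectly", 0),
    ("iv line disconnected during transfer to ward", 0),
    ("allergy not checked before administration", 0),
]

vocab = sorted({w for text, _ in train for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def featurize(text):
    x = [0.0] * len(vocab)
    for w in text.split():
        if w in index:
            x[index[w]] += 1.0
    return x

def train_logreg(data, epochs=200, lr=0.5):
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, y in data:
            x = featurize(text)
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of log-loss w.r.t. z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, text):
    z = b + sum(wi * xi for wi, xi in zip(w, featurize(text)))
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    # Probability that a random positive scores above a random negative.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

w, b = train_logreg(train)
scores = [predict(w, b, t) for t, _ in train]
print(auc(scores, [y for _, y in train]))
```

On real incident narratives one would of course evaluate on held-out reports rather than the training set; the sketch only shows the moving parts (features, model, ROC/AUC summary).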

  6. Recognizing and Reducing Analytical Errors and Sources of Variation in Clinical Pathology Data in Safety Assessment Studies.

    PubMed

    Schultze, A E; Irizarry, A R

    2017-02-01

    Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most of the errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors are often ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, inability to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased variability in data, (2) provides recommendations regarding prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.

  7. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approaches of Newey (1987) and Terza (2016), as well as bootstrapping. In our simulations, the Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
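
Of the corrections compared above, the bootstrap is the simplest to sketch: resample the data with replacement, re-estimate, and take the standard deviation of the estimates. The data below are simulated for illustration only, with an ordinary least-squares slope standing in for the estimator.

```python
# Nonparametric bootstrap standard error for an OLS slope estimate.
# The data are simulated for illustration only (y = 2x + noise).
import random

random.seed(42)

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

xs = [random.gauss(0, 1) for _ in range(100)]
ys = [2 * x + random.gauss(0, 1) for x in xs]

def bootstrap_se(xs, ys, n_boot=1000):
    n, slopes = len(xs), []
    for _ in range(n_boot):
        idx = [random.randrange(n) for _ in range(n)]          # resample rows
        slopes.append(ols_slope([xs[i] for i in idx], [ys[i] for i in idx]))
    m = sum(slopes) / n_boot
    return (sum((s - m) ** 2 for s in slopes) / (n_boot - 1)) ** 0.5

slope = ols_slope(xs, ys)
se = bootstrap_se(xs, ys)
print(round(slope, 2), round(se, 3))
```

For a TSRI estimator the resampling would wrap both stages of the estimation, which is exactly why the naive second-stage standard errors are too small: they ignore the first-stage uncertainty that the bootstrap propagates automatically.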

  8. Analysis of focusing error signals by differential astigmatic method under off-center tracking in the land-groove-type optical disk

    NASA Astrophysics Data System (ADS)

    Shinoda, Masahisa; Nakatani, Hidehiko

    2015-04-01

    We theoretically calculate the behavior of the focusing error signal in the land-groove-type optical disk when the objective lens traverses off-center along the radius of the optical disk. The differential astigmatic method is employed instead of the conventional astigmatic method for generating the focusing error signals. The signal behaviors are compared and analyzed in terms of the gain difference of the slope sensitivity of the focusing error signals from the land and the groove. In our calculation, the format of digital versatile disc-random access memory (DVD-RAM) is adopted as the land-groove-type optical disk model, and advantageous conditions for suppressing the gain difference are investigated. The calculation method and results described in this paper will be reflected in next-generation land-groove-type optical disks.

  9. Reduction in chemotherapy order errors with computerized physician order entry.

    PubMed

    Meisenberg, Barry R; Wright, Robert R; Brady-Copertino, Catherine J

    2014-01-01

    To measure the number and type of errors associated with chemotherapy order composition associated with three sequential methods of ordering: handwritten orders, preprinted orders, and computerized physician order entry (CPOE) embedded in the electronic health record. From 2008 to 2012, a sample of completed chemotherapy orders were reviewed by a pharmacist for the number and type of errors as part of routine performance improvement monitoring. Error frequencies for each of the three distinct methods of composing chemotherapy orders were compared using statistical methods. The rate of problematic order sets (those requiring significant rework for clarification) was reduced from 30.6% with handwritten orders to 12.6% with preprinted orders (preprinted v handwritten, P < .001) to 2.2% with CPOE (preprinted v CPOE, P < .001). The incidence of errors capable of causing harm was reduced from 4.2% with handwritten orders to 1.5% with preprinted orders (preprinted v handwritten, P < .001) to 0.1% with CPOE (CPOE v preprinted, P < .001). The number of problem- and error-containing chemotherapy orders was reduced sequentially by preprinted order sets and then by CPOE. CPOE is associated with low error rates, but it did not eliminate all errors, and the technology can introduce novel types of errors not seen with traditional handwritten or preprinted orders. Vigilance even with CPOE is still required to avoid patient harm.
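
Comparisons of error rates like those above are standard two-proportion tests; a minimal sketch (the counts are hypothetical, chosen only to roughly match the reported percentages):

```python
# Two-proportion z-test comparing error rates between two ordering methods.
# The counts are hypothetical, chosen to roughly match the reported rates.
import math

def two_proportion_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))
    return z, p_two_sided

# Hypothetical: 153/500 problematic handwritten orders vs 11/500 with CPOE
# (i.e., roughly 30.6% vs 2.2%).
z, p = two_proportion_z(153, 500, 11, 500)
print(round(z, 1), p < 0.001)
```

With rates this far apart, the test statistic is enormous and the p-value is far below any conventional threshold, consistent with the P < .001 comparisons reported.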

  10. Model error in covariance structure models: Some implications for power and Type I error

    PubMed Central

    Coffman, Donna L.

    2010-01-01

    The present study investigated the degree to which violation of the parameter drift assumption affects the Type I error rate for the test of close fit and power analysis procedures proposed by MacCallum, Browne, and Sugawara (1996) for both the test of close fit and the test of exact fit. The parameter drift assumption states that as sample size increases both sampling error and model error (i.e. the degree to which the model is an approximation in the population) decrease. Model error was introduced using a procedure proposed by Cudeck and Browne (1992). The empirical power for both the test of close fit, in which the null hypothesis specifies that the Root Mean Square Error of Approximation (RMSEA) ≤ .05, and the test of exact fit, in which the null hypothesis specifies that RMSEA = 0, is compared with the theoretical power computed using the MacCallum et al. (1996) procedure. The empirical power and theoretical power for both the test of close fit and the test of exact fit are nearly identical under violations of the assumption. The results also indicated that the test of close fit maintains the nominal Type I error rate under violations of the assumption. PMID:21331302

  11. Policies on documentation and disciplinary action in hospital pharmacies after a medication error.

    PubMed

    Bauman, A N; Pedersen, C A; Schommer, J C; Griffith, N L

    2001-06-15

    Hospital pharmacies were surveyed about policies on medication error documentation and actions taken against pharmacists involved in an error. The survey was mailed to 500 randomly selected hospital pharmacy directors in the United States. Data were collected on the existence of medication error reporting policies, what types of errors were documented and how, and hospital demographics. The response rate was 28%. Virtually all of the hospitals had policies and procedures for medication error reporting. Most commonly, documentation of oral and written reprimand was placed in the personnel file of a pharmacist involved in an error. One sixth of respondents had no policy on documentation or disciplinary action in the event of an error. Approximately one fourth of respondents reported that suspension or termination had been used as a form of disciplinary action; legal action was rarely used. Many respondents said errors that caused harm (42%) or death (40%) to the patient were documented in the personnel file, but 34% of hospitals did not document errors in the personnel file regardless of error type. Nearly three fourths of respondents differentiated between errors caught and not caught before a medication leaves the pharmacy and between errors caught and not caught before administration to the patient. More emphasis is needed on documentation of medication errors in hospital pharmacies.

  12. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    PubMed

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such designs can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
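The inflation mechanism can be shown by Monte Carlo simulation. The sketch below is a deliberately simplified two-arm, one-sided setting (nominal alpha = 0.025), not the paper's multiarmed Dunnett framework: the stage-2 sample size is chosen from the interim z-score, yet the final pooled z-statistic is referred to the usual 1.96 boundary as if no modification had been made. The adaptation rule is an arbitrary illustrative choice, not a rule from the paper.

```python
import random

def naive_final_z(z1, n1, n2):
    """Naive pooled z-statistic that ignores the data-driven choice of n2."""
    z2 = random.gauss(0.0, 1.0)  # stage-2 z-score under H0
    return (n1 ** 0.5 * z1 + n2 ** 0.5 * z2) / (n1 + n2) ** 0.5

def type1_rate(rule, n1=50, sims=200_000, crit=1.96):
    """Empirical one-sided type 1 error rate of the naive final test."""
    hits = 0
    for _ in range(sims):
        z1 = random.gauss(0.0, 1.0)  # stage-1 z-score under H0
        if naive_final_z(z1, n1, rule(z1)) > crit:
            hits += 1
    return hits / sims

random.seed(42)
fixed = type1_rate(lambda z1: 50)                          # no adaptation
adaptive = type1_rate(lambda z1: 10 if z1 > 1.5 else 500)  # data-driven n2
```

The rule that keeps promising interim results heavily weighted (small n2) and dilutes unpromising ones (large n2) yields a clearly elevated rejection rate, illustrating the "worst case" search described in the abstract.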

  13. Association between workarounds and medication administration errors in bar-code-assisted medication administration in hospitals.

    PubMed

    van der Veen, Willem; van den Bemt, Patricia M L A; Wouters, Hans; Bates, David W; Twisk, Jos W R; de Gier, Johan J; Taxis, Katja; Duyvendak, Michiel; Luttikhuis, Karen Oude; Ros, Johannes J W; Vasbinder, Erwin C; Atrafi, Maryam; Brasse, Bjorn; Mangelaars, Iris

    2018-04-01

To study the association of workarounds with medication administration errors when using barcode-assisted medication administration (BCMA), and to determine the frequency and types of workarounds and medication administration errors. A prospective observational study in Dutch hospitals using BCMA to administer medication. Direct observation was used to collect data. The primary outcome measure was the proportion of medication administrations with one or more medication administration errors. The secondary outcome was the frequency and types of workarounds and medication administration errors. Univariate and multivariate multilevel logistic regression analyses were used to assess the association between workarounds and medication administration errors. Descriptive statistics were used for the secondary outcomes. We included 5793 medication administrations for 1230 inpatients. Workarounds were associated with medication administration errors (adjusted odds ratio 3.06 [95% CI: 2.49-3.78]). Most commonly, procedural workarounds were observed, such as not scanning at all (36%), not scanning patients because they did not wear a wristband (28%), incorrect medication scanning, multiple medication scanning, and ignoring alert signals (11%). Common types of medication administration errors were omissions (78%), administration of non-ordered drugs (8.0%), and wrong doses given (6.0%). Workarounds are associated with medication administration errors in hospitals using BCMA. These data suggest that BCMA needs more post-implementation evaluation if it is to achieve the intended benefits for medication safety. In hospitals using barcode-assisted medication administration, workarounds occurred in 66% of medication administrations and were associated with large numbers of medication administration errors.
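The association measure here is an odds ratio from multilevel logistic regression. As a minimal illustration of the statistic itself, the sketch below computes a crude odds ratio with a Woolf (log-scale) 95% confidence interval from a 2x2 table; the cell counts are hypothetical, chosen only so the crude OR lands near the adjusted OR of 3.06 reported in the abstract.

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Woolf 95% CI for a 2x2 table:
    a = workaround & error, b = workaround & no error,
    c = no workaround & error, d = no workaround & no error."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only; the study reports an
# adjusted (not crude) OR of 3.06 from multilevel regression.
or_, lo, hi = odds_ratio_ci(250, 3550, 45, 1948)
```

A confidence interval entirely above 1 corresponds to the significant association reported above.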

  14. FipsOrtho: A Spell Checker for Learners of French

    ERIC Educational Resources Information Center

    L'Haire, Sebastien

    2007-01-01

This paper presents FipsOrtho, a spell checker targeted at learners of French, and a corpus of learners' errors which has been gathered to test the system and to get a sample of specific language learners' errors. Spell checkers are a standard feature of many software products; however, they are not designed for specific language learners' errors.…

  15. The ADRA2B gene in the production of false memories for affective information in healthy female volunteers.

    PubMed

    Fairfield, Beth; Mammarella, Nicola; Di Domenico, Alberto; D'Aurora, Marco; Stuppia, Liborio; Gatta, Valentina

    2017-08-30

    False memories are common memory distortions in everyday life and seem to increase with affectively connoted complex information. In line with recent studies showing a significant interaction between the noradrenergic system and emotional memory, we investigated whether healthy volunteer carriers of the deletion variant of the ADRA2B gene that codes for the α2b-adrenergic receptor are more prone to false memories than non-carriers. In this study, we collected genotype data from 212 healthy female volunteers; 91 ADRA2B carriers and 121 non-carriers. To assess gene effects on false memories for affective information, factorial mixed model analysis of variances (ANOVAs) were conducted with genotype as the between-subjects factor and type of memory error as the within-subjects factor. We found that although carriers and non-carriers made comparable numbers of false memory errors, they showed differences in the direction of valence biases, especially for inferential causal errors. Specifically, carriers produced fewer causal false memory errors for scripts with a negative outcome, whereas non-carriers showed a more general emotional effect and made fewer causal errors with both positive and negative outcomes. These findings suggest that putatively higher levels of noradrenaline in deletion carriers may enhance short-term consolidation of negative information and lead to fewer memory distortions when facing negative events. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Down's syndrome and the acquisition of phonology by Cantonese-speaking children.

    PubMed

    So, L K; Dodd, B J

    1994-10-01

    The phonological abilities of two groups of 4-9-year-old intellectually impaired Cantonese-speaking children are described. Children with Down's syndrome did not differ from matched non-Down's syndrome controls in terms of a lexical comprehension measure, the size of their phoneme repertoires, the range of sounds affected by articulatory imprecision, or the number of consonants, vowels or tones produced in error. However, the types of errors made by the Down's syndrome children were different from those made by the control subjects. Cantonese-speaking children with Down's syndrome, as compared with controls, made a greater number of inconsistent errors, were more likely to produce non-developmental errors and were better in imitation than in spontaneous production. Despite extensive differences between the phonological structures of Cantonese and English, children with Down's syndrome acquiring these languages show the same characteristic pattern of speech errors. One unexpected finding was that the control group of non-Down's syndrome children failed to present with delayed phonological development typically reported for their English-speaking counterparts. The argument made is that cross-linguistic studies of intellectually impaired children's language acquisition provide evidence concerning language-specific characteristics of impairment, as opposed to those characteristics that, remaining constant across languages, are an integral part of the disorder. The results reported here support the hypothesis that the speech disorder typically associated with Down's syndrome arises from impaired phonological planning, i.e. a cognitive linguistic deficit.

  17. Effects of tRNA modification on translational accuracy depend on intrinsic codon-anticodon strength.

    PubMed

    Manickam, Nandini; Joshi, Kartikeya; Bhatt, Monika J; Farabaugh, Philip J

    2016-02-29

Cellular health and growth require protein synthesis to be both efficient, to ensure sufficient production, and accurate, to avoid producing defective or unstable proteins. The background misreading error frequency by individual tRNAs is as low as 2 × 10⁻⁶ per codon but is codon-specific, with some error frequencies above 10⁻³ per codon. Here we test the effect on error frequency of blocking post-transcriptional modifications of the anticodon loops of four tRNAs in Escherichia coli. We find two types of responses to removing modification. Blocking modification of tRNA(UUC)(Glu) and tRNA(QUC)(Asp) increases errors, suggesting that the modifications act at least in part to maintain accuracy. Blocking even identical modifications of tRNA(UUU)(Lys) and tRNA(QUA)(Tyr) has the opposite effect of decreasing errors. One explanation could be that the modifications play opposite roles in modulating misreading by the two classes of tRNAs. Given available evidence that modifications help preorder the anticodon to allow it to recognize the codons, however, the simpler explanation is that unmodified 'weak' tRNAs decode too inefficiently to compete against cognate tRNAs that normally decode target codons, which would reduce the frequency of misreading. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.

    PubMed

    Hu, Y J; Lin, D Y; Sun, W; Zeng, D

    2014-10-01

    Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.

  19. Spatial heterogeneity of type I error for local cluster detection tests

    PubMed Central

    2014-01-01

    Background Just as power, type I error of cluster detection tests (CDTs) should be spatially assessed. Indeed, CDTs’ type I error and power have both a spatial component as CDTs both detect and locate clusters. In the case of type I error, the spatial distribution of wrongly detected clusters (WDCs) can be particularly affected by edge effect. This simulation study aims to describe the spatial distribution of WDCs and to confirm and quantify the presence of edge effect. Methods A simulation of 40 000 datasets has been performed under the null hypothesis of risk homogeneity. The simulation design used realistic parameters from survey data on birth defects, and in particular, two baseline risks. The simulated datasets were analyzed using the Kulldorff’s spatial scan as a commonly used test whose behavior is otherwise well known. To describe the spatial distribution of type I error, we defined the participation rate for each spatial unit of the region. We used this indicator in a new statistical test proposed to confirm, as well as quantify, the edge effect. Results The predefined type I error of 5% was respected for both baseline risks. Results showed strong edge effect in participation rates, with a descending gradient from center to edge, and WDCs more often centrally situated. Conclusions In routine analysis of real data, clusters on the edge of the region should be carefully considered as they rarely occur when there is no cluster. Further work is needed to combine results from power studies with this work in order to optimize CDTs performance. PMID:24885343

  20. Feasibility of neuro-morphic computing to emulate error-conflict based decision making.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branch, Darren W.

    2009-09-01

A key aspect of decision making is determining when errors or conflicts exist in information and knowing whether to continue or terminate an action. Understanding error-conflict processing is crucial in order to emulate higher brain functions in hardware and software systems. Specific brain regions, most notably the anterior cingulate cortex (ACC), are known to respond to the presence of conflicts in information by assigning a value to an action. Essentially, this conflict signal triggers strategic adjustments in cognitive control, which serve to prevent further conflict. The most probable mechanism is that the ACC reports and discriminates different types of feedback, both positive and negative, that relate to different adaptations. Unique cells called spindle neurons that are primarily found in the ACC (layer Vb) are known to be responsible for cognitive dissonance (disambiguation between alternatives). Thus, the ACC, through a specific set of cells, likely plays a central role in the ability of humans to make difficult decisions and solve challenging problems in the midst of conflicting information. In addition to dealing with cognitive dissonance, decision making in high-consequence scenarios also relies on the integration of multiple sets of information (sensory, reward, emotion, etc.). Thus, a second area of interest for this proposal lies in the corticostriatal networks that serve as an integration region for multiple cognitive inputs. In order to engineer neurological decision-making processes in silicon devices, we will determine the key cells, inputs, and outputs of conflict/error detection in the ACC region. The second goal is to understand in vitro models of corticostriatal networks and the impact of physical deficits on decision making, specifically in stressful scenarios with conflicting streams of data from multiple inputs. We will elucidate the mechanisms of cognitive data integration in order to implement a future corticostriatal-like network in silicon devices for improved decision processing.

  1. [Refractive errors in patients with cerebral palsy].

    PubMed

    Mrugacz, Małgorzata; Bandzul, Krzysztof; Kułak, Wojciech; Poppe, Ewa; Jurowski, Piotr

    2013-04-01

Ocular changes are common in patients with cerebral palsy (CP), occurring in about 50% of cases. The most common are refractive errors and strabismus. The aim of the paper was to estimate the relationship between refractive errors and neurological pathologies in patients with selected types of CP. Material and methods. The analysis covered refractive errors in patients in two groups of CP, diplegia spastica and tetraparesis, with nervous system pathologies taken into account. Results. The study demonstrated correlations between refractive errors and both the type of CP and its severity as classified on the GMFCS scale. Refractive errors were more common in patients with tetraparesis than with diplegia spastica. Myopia and astigmatism were more common in the diplegia spastica group, whereas hyperopia was more common in tetraparesis.

  2. A Mixture Modeling Framework for Differential Analysis of High-Throughput Data

    PubMed Central

    Taslim, Cenny; Lin, Shili

    2014-01-01

The invention of microarray and next-generation sequencing technologies has revolutionized research in genomics; these platforms have generated massive amounts of data on gene expression, methylation, and protein-DNA interactions. A common theme among a number of biological problems using high-throughput technologies is differential analysis. Despite the common theme, different data types have their own unique features, creating a “moving target” scenario. As such, methods specifically designed for one data type may not lead to satisfactory results when applied to another data type. To meet this challenge so that not only currently existing data types but also data from future problems, platforms, or experiments can be analyzed, we propose a mixture modeling framework that is flexible enough to automatically adapt to any moving target. More specifically, the approach considers several classes of mixture models and essentially provides a model-based procedure whose model is adaptive to the particular data being analyzed. We demonstrate the utility of the methodology by applying it to three types of real data: gene expression, methylation, and ChIP-seq. We also carried out simulations to gauge the performance and showed that the approach can be more efficient than any individual model without inflating type I error. PMID:25057284
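A concrete instance of the model class behind such frameworks is a two-component Gaussian mixture, with a "null" component for non-differential features and a shifted component for differential ones, fit by EM. The sketch below is a generic textbook EM on simulated data, not the authors' adaptive procedure.

```python
import random
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def em_two_gaussians(data, iters=100):
    """EM for a two-component Gaussian mixture: a 'null' component near 0
    and a 'differential' component with a shifted mean."""
    mu, sd, w = [0.0, 2.0], [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of component 1 for each point
        resp = []
        for x in data:
            p0 = w[0] * normal_pdf(x, mu[0], sd[0])
            p1 = w[1] * normal_pdf(x, mu[1], sd[1])
            resp.append(p1 / (p0 + p1))
        # M-step: re-estimate weights, means, and standard deviations
        for k, rk in ((0, [1 - r for r in resp]), (1, resp)):
            s = sum(rk)
            w[k] = s / len(data)
            mu[k] = sum(r * x for r, x in zip(rk, data)) / s
            sd[k] = sqrt(sum(r * (x - mu[k]) ** 2 for r, x in zip(rk, data)) / s)
    return w, mu, sd

random.seed(0)
data = [random.gauss(0, 1) for _ in range(700)] + \
       [random.gauss(3, 1) for _ in range(300)]
w, mu, sd = em_two_gaussians(data)
```

On this simulated mixture (70% null at mean 0, 30% differential at mean 3), EM recovers the component means and weights to within sampling error.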

  3. Nearly two decades using the check-type to prevent ABO incompatible transfusions: one institution's experience.

    PubMed

    Figueroa, Priscila I; Ziman, Alyssa; Wheeler, Christine; Gornbein, Jeffrey; Monson, Michael; Calhoun, Loni

    2006-09-01

    To detect miscollected (wrong blood in tube [WBIT]) samples, our institution requires a second independently drawn sample (check-type [CT]) on previously untyped, non-group O patients who are likely to require transfusion. During the 17-year period addressed by this report, 94 WBIT errors were detected: 57% by comparison with a historic blood type, 7% by the CT, and 35% by other means. The CT averted 5 potential ABO-incompatible transfusions. Our corrected WBIT error rate is 1 in 3,713 for verified samples tested between 2000 and 2003, the period for which actual number of CTs performed was available. The estimated rate of WBIT for the 17-year period is 1 in 2,262 samples. ABO-incompatible transfusions due to WBIT-type errors are avoided by comparison of current blood type results with a historic type, and the CT is an effective way to create a historic type.

  4. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
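Because the pressure Poisson equation is linear, error in the PIV-derived source term propagates linearly into the computed pressure field, with a constant that depends on the domain and boundary conditions. The sketch below illustrates this in a 1-D stand-in for the 2-D problem: a finite-difference Poisson solve (Thomas algorithm) with a fixed noise pattern applied to the source term at two amplitudes.

```python
import random

def solve_poisson_1d(f, h):
    """Solve p'' = f on (0, 1) with p(0) = p(1) = 0 via the Thomas algorithm."""
    n = len(f)                       # number of interior grid points
    a = [1.0] * n                    # sub-diagonal of the -2/1 stencil
    b = [-2.0] * n                   # main diagonal
    c = [1.0] * n                    # super-diagonal
    d = [h * h * fi for fi in f]     # scaled right-hand side
    for i in range(1, n):            # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    p = [0.0] * n                    # back substitution
    p[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        p[i] = (d[i] - c[i] * p[i + 1]) / b[i]
    return p

n = 199
h = 1.0 / (n + 1)
f = [1.0] * n                        # "true" source term (stand-in for PIV data)
random.seed(1)
noise = [random.gauss(0, 1) for _ in range(n)]

p_true = solve_poisson_1d(f, h)
errs = {}
for eps in (0.01, 0.1):
    p_noisy = solve_poisson_1d([fi + eps * ni for fi, ni in zip(f, noise)], h)
    errs[eps] = max(abs(x - y) for x, y in zip(p_noisy, p_true))
```

Scaling the source-term error by 10 scales the pressure error by exactly 10, reflecting the linearity that underlies the error bounds discussed in the abstract.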

  5. Compound Stimulus Presentation Does Not Deepen Extinction in Human Causal Learning

    PubMed Central

    Griffiths, Oren; Holmes, Nathan; Westbrook, R. Fred

    2017-01-01

    Models of associative learning have proposed that cue-outcome learning critically depends on the degree of prediction error encountered during training. Two experiments examined the role of error-driven extinction learning in a human causal learning task. Target cues underwent extinction in the presence of additional cues, which differed in the degree to which they predicted the outcome, thereby manipulating outcome expectancy and, in the absence of any change in reinforcement, prediction error. These prediction error manipulations have each been shown to modulate extinction learning in aversive conditioning studies. While both manipulations resulted in increased prediction error during training, neither enhanced extinction in the present human learning task (one manipulation resulted in less extinction at test). The results are discussed with reference to the types of associations that are regulated by prediction error, the types of error terms involved in their regulation, and how these interact with parameters involved in training. PMID:28232809

  6. Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments

    PubMed Central

    Forster, Sarah E.; Cho, Raymond Y.

    2014-01-01

    There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900

  7. Online automatic tuning and control for fed-batch cultivation

    PubMed Central

    van Straten, Gerrit; van der Pol, Leo A.; van Boxtel, Anton J. B.

    2007-01-01

    Performance of controllers applied in biotechnological production is often below expectation. Online automatic tuning has the capability to improve control performance by adjusting control parameters. This work presents automatic tuning approaches for model reference specific growth rate control during fed-batch cultivation. The approaches are direct methods that use the error between observed specific growth rate and its set point; systematic perturbations of the cultivation are not necessary. Two automatic tuning methods proved to be efficient, in which the adaptation rate is based on a combination of the error, squared error and integral error. These methods are relatively simple and robust against disturbances, parameter uncertainties, and initialization errors. Application of the specific growth rate controller yields a stable system. The controller and automatic tuning methods are qualified by simulations and laboratory experiments with Bordetella pertussis. PMID:18157554

  8. Analysis of Structure and Specific Functional Groups Involved in Acetylcholinesterase Catalysis and Inhibition

    DTIC Science & Technology

    1992-12-15

et al., 1990). 2. Spodoptera frugiperda (Sf9) cells were typically grown in 250 mL of medium in a 500-mL spinner flask with slow stirring at 27°C in... reasonably good expression systems in Spodoptera for preparing large quantities of enzyme. The enzymes prepared from the baculovirus-Spodoptera system were... (± Standard Errors) for Wild-Type and Mutant Acetylcholinesterases Expressed in a Baculovirus-Spodoptera System

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elliott, C.J.; McVey, B.; Quimby, D.C.

The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  10. Emergency department discharge prescription errors in an academic medical center

    PubMed Central

    Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.

    2017-01-01

    This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061

  11. Scheduling periodic jobs using imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1987-01-01

    One approach to avoid timing faults in hard, real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.
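For type N jobs, where a task's error is simply its unexecuted optional processing time, a simple greedy allocation illustrates the idea: run every mandatory part, then spend whatever time remains on optional parts in order of an assumed importance weight. This is a toy sketch under hypothetical parameters, not one of the paper's three heuristic algorithms.

```python
def schedule_imprecise(tasks, budget):
    """Greedy allocation for type-N jobs: run every mandatory part, then spend
    leftover time on optional parts in descending order of weight.
    error_i = unexecuted optional time of task i; returns the weighted
    average error over all tasks."""
    mandatory = sum(m for m, _opt, _w in tasks)
    if mandatory > budget:
        raise ValueError("infeasible: mandatory parts exceed the budget")
    left = budget - mandatory
    errors = []
    for m, opt, w in sorted(tasks, key=lambda t: -t[2]):
        run = min(opt, left)         # give this task as much optional time as possible
        left -= run
        errors.append((w, opt - run))
    total_w = sum(w for w, _ in errors)
    return sum(w * e for w, e in errors) / total_w

# (mandatory, optional, weight) -- an entirely hypothetical task set
tasks = [(2, 4, 3), (1, 2, 1), (3, 5, 2)]
avg_err = schedule_imprecise(tasks, budget=10)
```

With a budget of 10 and 6 units of mandatory work, 4 units of optional time go to the heaviest task, leaving errors of 5 and 2 on the other two.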

  12. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. 
This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.

  13. Asymmetric Memory Circuit Would Resist Soft Errors

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G.; Perlman, Marvin

    1990-01-01

    Some nonlinear error-correcting codes more efficient in presence of asymmetry. Combination of circuit-design and coding concepts expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets" due to ionizing radiation). Integrated circuit of new type made deliberately more susceptible to one kind of bit error than to other, and associated error-correcting code adapted to exploit this asymmetry in error probabilities.
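The exploit-the-asymmetry idea can be illustrated with a Berger code, a classic scheme for unidirectional (asymmetric) errors; this is a generic sketch, not the specific nonlinear code of the abstract. The check symbol is the count of 0-bits in the data word, so any error that only flips bits in one direction always changes the zero count and is detected:

```python
# Berger code sketch: detects all unidirectional (asymmetric) bit errors.
# Illustrative only -- not the specific nonlinear code described above.

def berger_check(bits):
    """Check symbol = number of 0-bits in the data word."""
    return bits.count(0)

def encode(bits):
    return (list(bits), berger_check(bits))

def is_valid(word):
    bits, check = word
    return berger_check(bits) == check

# A data word stored with its check symbol.
word = encode([1, 0, 1, 1, 0, 1])

# Upsets that only flip 1 -> 0 (the direction a deliberately asymmetric
# memory cell favors) always increase the zero count, so the mismatch
# with the stored check symbol is always detected.
bits, check = word
corrupted = ([0 if i in (0, 2) else b for i, b in enumerate(bits)], check)

print(is_valid(word))       # True
print(is_valid(corrupted))  # False
```

A symmetric code would need more check bits to guarantee detection of the same number of errors, which is why matching the code to the circuit's dominant error direction pays off.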

  14. A Conjoint Analysis Framework for Evaluating User Preferences in Machine Translation

    PubMed Central

    Kirchhoff, Katrin; Capurro, Daniel; Turner, Anne M.

    2013-01-01

Despite much research on machine translation (MT) evaluation, there is surprisingly little work that directly measures users’ intuitive or emotional preferences regarding different types of MT errors. However, the elicitation and modeling of user preferences is an important prerequisite for research on user adaptation and customization of MT engines. In this paper we explore the use of conjoint analysis as a formal quantitative framework to assess users’ relative preferences for different types of translation errors. We apply our approach to the analysis of MT output from translating public health documents from English into Spanish. Our results indicate that word order errors are clearly the most dispreferred error type, followed by word sense, morphological, and function word errors. The conjoint analysis-based model is able to predict user preferences more accurately than a baseline model that chooses the translation with the fewest errors overall. Additionally, we analyze the effect of using a crowd-sourced respondent population versus a sample of domain experts and observe that the main preference effects are remarkably stable across the two samples. PMID:24683295
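The core of a conjoint-style analysis is estimating a part-worth (utility penalty) for each error type from user ratings of alternatives. In the simplest balanced case, with one error of one type per item, each part-worth is just the rating drop relative to an error-free translation. A minimal sketch with invented ratings (all numbers hypothetical, not the paper's data):

```python
# Toy conjoint-style estimation of error-type part-worths.
# All ratings below are made up for illustration; they are not the
# paper's data. With a balanced one-error-at-a-time design, each
# part-worth is the rating drop relative to an error-free sentence.

baseline = 10.0  # hypothetical user rating of an error-free translation

ratings = {            # mean rating when one error of this type is present
    "word_order": 6.0,
    "word_sense": 7.5,
    "morphological": 8.5,
    "function_word": 9.0,
}

part_worths = {etype: baseline - r for etype, r in ratings.items()}

# Rank error types from most to least dispreferred.
ranking = sorted(part_worths, key=part_worths.get, reverse=True)
print(ranking)  # word-order errors carry the largest penalty
```

With a full factorial design (several errors per item), the same part-worths would instead be estimated by regressing ratings on error counts; the balanced case above is the degenerate version of that regression.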

  15. Exact test-based approach for equivalence test with parameter margin.

    PubMed

    Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua

    2017-01-01

The equivalence test has a wide range of applications in pharmaceutical statistics, wherever we need to test for the similarity between two groups. In recent years, the equivalence test has been used in assessing the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, a function of the reference variability σ_R. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic for the equivalence test on the means, both the Type I and Type II error rates may be inflated. To resolve this issue, we develop an exact test-based method and compare it with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
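The naive procedure the abstract warns about can be sketched as two one-sided t-tests (TOST) that plug in the estimated margin f × S_R as if it were fixed. Function name, the f value, and the data are illustrative, not the paper's:

```python
# Sketch of the two one-sided tests (TOST) for mean equivalence with a
# margin estimated from the reference sample, margin = f * S_R.
# Function name, f value and data are illustrative, not the paper's.
import math
import statistics

def tost_equivalent(test, ref, f=1.5, t_crit=1.734):
    """Naive TOST: conclude equivalence if both one-sided t statistics
    clear t_crit (hard-coded here for alpha = 0.05, df = 18 with n = 10
    per group). Treating the estimated margin f*S_R as if it were fixed
    is exactly what can inflate the error rates discussed above."""
    n_t, n_r = len(test), len(ref)
    margin = f * statistics.stdev(ref)
    sp2 = (((n_t - 1) * statistics.variance(test)
            + (n_r - 1) * statistics.variance(ref)) / (n_t + n_r - 2))
    se = math.sqrt(sp2 * (1 / n_t + 1 / n_r))
    diff = statistics.mean(test) - statistics.mean(ref)
    t_lower = (diff + margin) / se   # tests H0: diff <= -margin
    t_upper = (diff - margin) / se   # tests H0: diff >= +margin
    return t_lower > t_crit and -t_upper > t_crit

ref = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9]
similar = [10.0, 10.1, 9.9, 10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.2]
shifted = [x + 2.0 for x in similar]

print(tost_equivalent(similar, ref))  # True: means are close
print(tost_equivalent(shifted, ref))  # False: shift far exceeds f*S_R
```

The exact-based, Wald, constrained Wald, and GPQ methods discussed in the paper all modify this naive decision rule to account for the sampling variability of S_R.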

  16. Spelling Errors of Dyslexic Children in Bosnian Language With Transparent Orthography.

    PubMed

    Duranović, Mirela

The purpose of this study was to explore the nature of spelling errors made by children with dyslexia in the Bosnian language, which has a transparent orthography. Three main error categories were distinguished: phonological, orthographic, and grammatical errors. An analysis of error type showed 86% phonological errors, 10% orthographic errors, and 4% grammatical errors. Furthermore, the majority of errors were omissions and substitutions, followed by insertions, omissions of the rules of assimilation by voicing, and errors in suffix use. We can conclude that phonological errors were dominant in children with dyslexia at all grade levels.

  17. Speech errors of amnesic H.M.: unlike everyday slips-of-the-tongue.

    PubMed

    MacKay, Donald G; James, Lori E; Hadley, Christopher B; Fogler, Kethera A

    2011-03-01

    Three language production studies indicate that amnesic H.M. produces speech errors unlike everyday slips-of-the-tongue. Study 1 was a naturalistic task: H.M. and six controls closely matched for age, education, background and IQ described what makes captioned cartoons funny. Nine judges rated the descriptions blind to speaker identity and gave reliably more negative ratings for coherence, vagueness, comprehensibility, grammaticality, and adequacy of humor-description for H.M. than the controls. Study 2 examined "major errors", a novel type of speech error that is uncorrected and reduces the coherence, grammaticality, accuracy and/or comprehensibility of an utterance. The results indicated that H.M. produced seven types of major errors reliably more often than controls: substitutions, omissions, additions, transpositions, reading errors, free associations, and accuracy errors. These results contradict recent claims that H.M. retains unconscious or implicit language abilities and produces spoken discourse that is "sophisticated," "intact" and "without major errors." Study 3 examined whether three classical types of errors (omissions, additions, and substitutions of words and phrases) differed for H.M. versus controls in basic nature and relative frequency by error type. The results indicated that omissions, and especially multi-word omissions, were relatively more common for H.M. than the controls; and substitutions violated the syntactic class regularity (whereby, e.g., nouns substitute with nouns but not verbs) relatively more often for H.M. than the controls. These results suggest that H.M.'s medial temporal lobe damage impaired his ability to rapidly form new connections between units in the cortex, a process necessary to form complete and coherent internal representations for novel sentence-level plans. 
In short, different brain mechanisms underlie H.M.'s major errors (which reflect incomplete and incoherent sentence-level plans) versus everyday slips-of-the-tongue (which reflect errors in activating pre-planned units in fully intact sentence-level plans). Implications of the results of Studies 1-3 are discussed for systems theory, binding theory and relational memory theories. Copyright © 2010 Elsevier Srl. All rights reserved.

  18. The Sources of Error in Spanish Writing.

    ERIC Educational Resources Information Center

    Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.

    1999-01-01

Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)

  19. BETASEQ: a powerful novel method to control type-I error inflation in partially sequenced data for rare variant association testing.

    PubMed

    Yan, Song; Li, Yun

    2014-02-15

Despite its great capability to detect rare variant associations, next-generation sequencing is still prohibitively expensive when applied to large samples. In case-control studies, it is thus appealing to sequence only a subset of cases to discover variants and to genotype the identified variants in controls and the remaining cases, under the reasonable assumption that causal variants are usually enriched among cases. However, this approach leads to inflated type-I error if analyzed naively for rare variant association. Several methods have been proposed in the recent literature to control type-I error, at the cost of either excluding some sequenced cases or correcting the genotypes of discovered rare variants. All of these approaches thus suffer some loss of information and are therefore underpowered. We propose a novel method (BETASEQ), which corrects the inflation of type-I error by supplementing pseudo-variants while keeping the original sequence and genotype data intact. Extensive simulations and real data analysis demonstrate that, in most practical situations, BETASEQ leads to higher testing power than existing approaches with guaranteed (controlled or conservative) type-I error. BETASEQ and associated R files, including documentation and examples, are available at http://www.unc.edu/~yunmli/betaseq

  20. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

There have been many studies on gas turbine engine identification via dynamic neural network models; the identification process should minimize the error between the model and the real object. However, questions about how the training data set is constructed are usually overlooked. This article studies the influence of the training data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine, whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signal (step, fast, slow and mixed) were used to create the training and testing data sets, and four dynamic neural networks were trained, one on each type of training data set. Each neural network was then tested against all four test data sets, and the resulting 16 transition processes were compared with the corresponding solutions of the thermodynamic model. The errors of all the neural networks were compared within each test data set, yielding an error-value range for each. These ranges turn out to be small, so the influence of the data set type on identification accuracy is low.

  1. Quality assurance for online adapted treatment plans: Benchmarking and delivery monitoring simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Taoran, E-mail: taoran.li.duke@gmail.com; Wu, Qiuwen; Yang, Yun

Purpose: An important challenge facing online adaptive radiation therapy is the development of feasible and efficient quality assurance (QA). This project aimed to validate the deliverability of online adapted plans and develop a proof-of-concept online delivery monitoring system for online adaptive radiation therapy QA. Methods: The first part of this project benchmarked automatically online adapted prostate treatment plans using traditional portal dosimetry IMRT QA. The portal dosimetry QA results of online adapted plans were compared to original (unadapted) plans as well as randomly selected prostate IMRT plans from our clinic. In the second part, an online delivery monitoring system was designed and validated via a simulated treatment with intentional multileaf collimator (MLC) errors. This system was based on inputs from the dynamic machine information (DMI), which continuously reports actual MLC positions and machine monitor units (MUs) at intervals of 50 ms or less during delivery. Based on the DMI, the system performed two levels of monitoring/verification during the delivery: (1) dynamic monitoring of cumulative fluence errors resulting from leaf position deviations and visualization using fluence error maps (FEMs); and (2) verification of MLC positions against the treatment plan for potential errors in MLC motion and data transfer at each control point. Validation of the online delivery monitoring system was performed by introducing intentional systematic MLC errors (ranging from 0.5 to 2 mm) to the DMI files for both leaf banks. These DMI files were analyzed by the proposed system to evaluate the system’s performance in quantifying errors and revealing the source of errors, as well as to understand patterns in the FEMs.
In addition, FEMs from 210 actual prostate IMRT beams were analyzed using the proposed system to further validate its ability to catch and identify errors, as well as establish error magnitude baselines for prostate IMRT delivery. Results: Online adapted plans were found to have similar delivery accuracy in comparison to clinical IMRT plans when validated with portal dosimetry IMRT QA. FEMs for the simulated deliveries with intentional MLC errors exhibited distinct patterns for different MLC error magnitudes and directions, indicating that the proposed delivery monitoring system is highly specific in detecting the source of errors. Implementing the proposed QA system for online adapted plans revealed excellent delivery accuracy: over 99% of leaf position differences were within 0.5 mm, and >99% of pixels in the FEMs had fluence errors within 0.5 MU. Patterns present in the FEMs and MLC control point analysis for actual patient cases agreed with the error pattern analysis results, further validating the system’s ability to reveal and differentiate MLC deviations. Calculation of the fluence map based on the DMI was performed within 2 ms after receiving each DMI input. Conclusions: The proposed online delivery monitoring system requires minimal additional resources and time commitment to the current clinical workflow while still maintaining high sensitivity to leaf position errors and specificity to error types. The presented online delivery monitoring system therefore represents a promising QA system candidate for online adaptive radiation therapy.

  2. Quality assurance for online adapted treatment plans: benchmarking and delivery monitoring simulation.

    PubMed

    Li, Taoran; Wu, Qiuwen; Yang, Yun; Rodrigues, Anna; Yin, Fang-Fang; Jackie Wu, Q

    2015-01-01

    An important challenge facing online adaptive radiation therapy is the development of feasible and efficient quality assurance (QA). This project aimed to validate the deliverability of online adapted plans and develop a proof-of-concept online delivery monitoring system for online adaptive radiation therapy QA. The first part of this project benchmarked automatically online adapted prostate treatment plans using traditional portal dosimetry IMRT QA. The portal dosimetry QA results of online adapted plans were compared to original (unadapted) plans as well as randomly selected prostate IMRT plans from our clinic. In the second part, an online delivery monitoring system was designed and validated via a simulated treatment with intentional multileaf collimator (MLC) errors. This system was based on inputs from the dynamic machine information (DMI), which continuously reports actual MLC positions and machine monitor units (MUs) at intervals of 50 ms or less during delivery. Based on the DMI, the system performed two levels of monitoring/verification during the delivery: (1) dynamic monitoring of cumulative fluence errors resulting from leaf position deviations and visualization using fluence error maps (FEMs); and (2) verification of MLC positions against the treatment plan for potential errors in MLC motion and data transfer at each control point. Validation of the online delivery monitoring system was performed by introducing intentional systematic MLC errors (ranging from 0.5 to 2 mm) to the DMI files for both leaf banks. These DMI files were analyzed by the proposed system to evaluate the system's performance in quantifying errors and revealing the source of errors, as well as to understand patterns in the FEMs. In addition, FEMs from 210 actual prostate IMRT beams were analyzed using the proposed system to further validate its ability to catch and identify errors, as well as establish error magnitude baselines for prostate IMRT delivery. 
Online adapted plans were found to have similar delivery accuracy in comparison to clinical IMRT plans when validated with portal dosimetry IMRT QA. FEMs for the simulated deliveries with intentional MLC errors exhibited distinct patterns for different MLC error magnitudes and directions, indicating that the proposed delivery monitoring system is highly specific in detecting the source of errors. Implementing the proposed QA system for online adapted plans revealed excellent delivery accuracy: over 99% of leaf position differences were within 0.5 mm, and >99% of pixels in the FEMs had fluence errors within 0.5 MU. Patterns present in the FEMs and MLC control point analysis for actual patient cases agreed with the error pattern analysis results, further validating the system's ability to reveal and differentiate MLC deviations. Calculation of the fluence map based on the DMI was performed within 2 ms after receiving each DMI input. The proposed online delivery monitoring system requires minimal additional resources and time commitment to the current clinical workflow while still maintaining high sensitivity to leaf position errors and specificity to error types. The presented online delivery monitoring system therefore represents a promising QA system candidate for online adaptive radiation therapy.
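The fluence-error-map idea in the two records above can be sketched on a one-dimensional pixel row: each control point deposits its MU into the pixels between the left and right leaf tips, and the FEM is the planned-minus-actual difference. A toy sketch (pixel grid, leaf positions and MU values are invented, not from the paper):

```python
# Toy 1-D fluence error map: accumulate MU between leaf tips at each
# control point, then subtract actual from planned delivery. The grid,
# leaf positions and MU values are invented for illustration.

N_PIXELS = 20

def cumulative_fluence(control_points):
    """control_points: list of (left_leaf_px, right_leaf_px, mu)."""
    fluence = [0.0] * N_PIXELS
    for left, right, mu in control_points:
        for px in range(left, right):   # open aperture between the tips
            fluence[px] += mu
    return fluence

planned = [(5, 15, 1.0), (6, 14, 1.0)]
# Actual delivery with a systematic +1 px error on the left bank only.
actual = [(left + 1, right, mu) for left, right, mu in planned]

fem = [p - a for p, a in zip(cumulative_fluence(planned),
                             cumulative_fluence(actual))]

# Errors appear only at the left aperture edges (pixels 5 and 6).
print([i for i, e in enumerate(fem) if abs(e) > 0])  # [5, 6]
```

This is why the FEM patterns are diagnostic: a one-bank systematic shift concentrates errors along one aperture edge, while a two-bank shift or a data-transfer error produces a different signature.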

  3. Statistical theory and methodology for remote sensing data analysis

    NASA Technical Reports Server (NTRS)

    Odell, P. L.

    1974-01-01

A model is developed for the evaluation of acreages (proportions) of different crop types over a geographical area using a classification approach, and methods for estimating the crop acreages are given. In estimating the acreage of a specific crop type such as wheat, it is suggested that the problem be treated as a two-crop problem, wheat vs. nonwheat, since this simplifies the estimation considerably. The error analysis and the sample size problem are investigated for the two-crop approach. Certain numerical results for sample sizes are given for a JSC-ERTS-1 data example on wheat identification performance in Hill County, Montana and Burke County, North Dakota. Lastly, for a large-area crop acreage inventory, a sampling scheme is suggested for acquiring sample data, and the problems of crop acreage estimation and error analysis are discussed.

  4. Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study

    PubMed Central

    Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César

    2011-01-01

    OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039

  5. Prevalence of amblyopia and patterns of refractive error in the amblyopic children of a tertiary eye care center of Nepal.

    PubMed

    Sapkota, K; Pirouzian, A; Matta, N S

    2013-01-01

Refractive error is a common cause of amblyopia. The aim was to determine the prevalence of amblyopia and the pattern and types of refractive error in children with amblyopia in a tertiary eye hospital of Nepal. A retrospective chart review of children diagnosed with amblyopia in the Nepal Eye Hospital (NEH) from July 2006 to June 2011 was conducted. Children aged 13 or older, or who had any ocular pathology, were excluded. Cycloplegic refraction and an ophthalmological examination were performed for all children. The pattern of refractive error and the association between types of refractive error and types of amblyopia were determined. Amblyopia was found in 0.7% (440) of the 62,633 children examined in NEH during this period. All the amblyopic eyes of the subjects had refractive error. Fifty-six percent (248) of the patients were male, and the mean age was 7.74 ± 2.97 years. Anisometropia was the most common cause of amblyopia (p < 0.001). One third (29%) of the subjects had bilateral amblyopia due to high ametropia. Forty percent of eyes had severe amblyopia, with visual acuity of 20/120 or worse. About two-thirds (59.2%) of the eyes had astigmatism. The prevalence of amblyopia in the Nepal Eye Hospital is 0.7%. Anisometropia is the most common cause of amblyopia, and astigmatism is the most common type of refractive error in amblyopic eyes. © NEPjOPH.

  6. Robust Linear Models for Cis-eQTL Analysis.

    PubMed

    Rantalainen, Mattias; Lindgren, Cecilia M; Holmes, Christopher C

    2015-01-01

Expression Quantitative Trait Loci (eQTL) analysis enables characterisation of functional genetic variation influencing expression levels of individual genes. In outbred populations, including humans, eQTLs are commonly analysed using the conventional linear model, adjusting for relevant covariates, assuming an allelic dosage model and a Gaussian error term. However, gene expression data generally have noise that induces heavy-tailed errors relative to the Gaussian distribution and often include atypical observations, or outliers. Such departures from modelling assumptions can lead to an increased rate of type II errors (false negatives), and to some extent also type I errors (false positives). Careful model checking can reduce the risk of type I errors but often not type II errors, since it is generally too time-consuming to carefully check all models with a non-significant effect in large-scale and genome-wide studies. Here we propose the application of a robust linear model for eQTL analysis to reduce the adverse effects of deviations from the assumption of Gaussian residuals. We present results from a simulation study as well as from the analysis of real eQTL data sets. Our findings suggest that in many situations robust models have the potential to provide more reliable eQTL results than conventional linear models, particularly with respect to reducing type II errors due to non-Gaussian noise. Post-genomic data, such as those generated in genome-wide eQTL studies, are often noisy and frequently contain atypical observations. Robust statistical models have the potential to provide more reliable results and increased statistical power under non-Gaussian conditions. The results presented here suggest that robust models should be considered routinely alongside other commonly used methodologies for eQTL analysis.
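The robustness argument can be seen in a minimal Huber-weighted regression fitted by iteratively reweighted least squares (IRLS), compared with ordinary least squares on data containing one outlier. This is a generic pure-Python sketch of the technique, not the paper's full eQTL model (which also adjusts for covariates):

```python
# Minimal sketch: Huber-weighted simple regression via IRLS versus OLS
# on data with one gross outlier. Generic illustration of why robust
# models resist atypical observations; not the paper's full eQTL model.
import statistics

def wls(x, y, w):
    """Weighted least squares for y = a + b*x; returns (a, b)."""
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return yb - b * xb, b

def huber_fit(x, y, c=1.345, iters=25):
    w = [1.0] * len(x)                        # first pass is plain OLS
    for _ in range(iters):
        a, b = wls(x, y, w)
        r = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        scale = statistics.median(map(abs, r)) / 0.6745 or 1.0
        # Huber weights: downweight residuals beyond c * scale.
        w = [min(1.0, c * scale / abs(ri)) if ri else 1.0 for ri in r]
    return a, b

x = list(range(10))
y = [2.0 * xi for xi in x]                    # true slope is 2
y[9] = 60.0                                   # one gross outlier

_, b_ols = wls(x, y, [1.0] * len(x))
_, b_rob = huber_fit(x, y)
print(round(b_ols, 2), round(b_rob, 2))       # OLS slope pulled up; robust slope near 2
```

The outlier inflates the OLS slope well above the true value, while the Huber weights shrink its influence toward zero, mirroring the type II error reduction the abstract reports.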

  7. Planetary Transmission Diagnostics

    NASA Technical Reports Server (NTRS)

    Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.

    2004-01-01

    This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. 
However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
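The prediction-error idea behind lifting can be seen in its simplest form: split the signal into even and odd samples, predict each odd sample linearly from its even neighbors, and keep the prediction error as the detail coefficient; a localized waveform change then produces a large detail exactly at the fault. This is a generic linear-prediction lifting step, not the constrained adaptive variant of the report:

```python
# One lifting "predict" step with a linear predictor: detail[i] is the
# error in predicting odd sample 2i+1 from its even neighbours.
# Generic sketch, not the constrained adaptive lifting of the report.

def lifting_details(signal):
    even = signal[0::2]
    odd = signal[1::2]
    details = []
    for i, o in enumerate(odd):
        left = even[i]
        # Linear extrapolation at the right boundary.
        right = even[i + 1] if i + 1 < len(even) else 2 * even[i] - even[i - 1]
        details.append(o - 0.5 * (left + right))   # prediction error
    return details

# A locally linear "healthy" waveform: predictions are exact, so the
# prediction errors (details) are all zero.
healthy = [float(t) for t in range(16)]
print(max(abs(d) for d in lifting_details(healthy)))   # 0.0

# The same waveform with a localized change (toy stand-in for damage)
# yields a large prediction error at the corresponding detail index.
faulty = list(healthy)
faulty[9] += 5.0
details = lifting_details(faulty)
print(max(range(len(details)), key=lambda i: abs(details[i])))  # 4
```

Adaptive lifting replaces the fixed 0.5/0.5 predictor with a basis chosen per analysis domain; the diagnostic principle, a jump in prediction error at a local waveform change, is the same.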

  8. The magnocellular visual pathway and facial emotion misattribution errors in schizophrenia.

    PubMed

    Bedwell, Jeffrey S; Chan, Chi C; Cohen, Ovad; Karbi, Yinnon; Shamir, Eyal; Rassovsky, Yuri

    2013-07-01

Many individuals with schizophrenia show impairment in labeling the emotion depicted by faces, and tend to ascribe anger or fear to neutral expressions. Preliminary research has linked some of these difficulties to dysfunction in the magnocellular (M) visual pathway, which has direct projections to subcortical emotion processing regions. The current study attempted to clarify these relationships using a novel paradigm that included a red background. Diffuse red light is known to suppress the M-pathway in nonpsychiatric adults, and there is preliminary evidence that it may have the opposite (stimulating) effect in schizophrenia-spectrum disorders (SSDs). Twenty-five individuals with SSDs were compared with 31 nonpsychiatric controls using a facial emotion identification task depicting happy, angry, fearful, and sad emotions on red, green, and gray backgrounds. There was a robust interaction of group by change in errors to the red (vs. green) background for misattributing fear expressions as depicting anger (p=.001, η²=.18). Specifically, controls showed a significant decrease in this type of error with the red background (p=.003, d=0.77), while the SSD group tended to increase this type of error (p=.07, d=0.54). These findings suggest that the well-established M-pathway abnormalities in SSDs may contribute to the heightened misperception of other emotions such as anger, which in turn may cause social misperceptions in the environment and elicit symptoms such as paranoia and social withdrawal. As the ventral striatum plays a primary role in identifying anger and receives efferent input from the M-pathway, it may serve as the neuroanatomical substrate in the perception of anger. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Geometric Quality Assessment of LIDAR Data Based on Swath Overlap

    NASA Astrophysics Data System (ADS)

    Sampath, A.; Heidemann, H. K.; Stensaas, G. L.

    2016-06-01

    This paper provides guidelines on quantifying the relative horizontal and vertical errors observed between conjugate features in the overlapping regions of lidar data. The quantification of these errors is important because their presence quantifies the geometric quality of the data. A data set can be said to have good geometric quality if measurements of identical features, regardless of their position or orientation, yield identical results. Good geometric quality indicates that the data are produced using sensor models that are working as they are mathematically designed, and data acquisition processes are not introducing any unforeseen distortion in the data. High geometric quality also leads to high geolocation accuracy of the data when the data acquisition process includes coupling the sensor with geopositioning systems. Current specifications (e.g. Heidemann 2014) do not provide adequate means to quantitatively measure these errors, even though they are required to be reported. Current accuracy measurement and reporting practices followed in the industry and as recommended by data specification documents also potentially underestimate the inter-swath errors, including the presence of systematic errors in lidar data. Hence they pose a risk to the user in terms of data acceptance (i.e. a higher potential for Type II error indicating risk of accepting potentially unsuitable data). For example, if the overlap area is too small or if the sampled locations are close to the center of overlap, or if the errors are sampled in flat regions when there are residual pitch errors in the data, the resultant Root Mean Square Differences (RMSD) can still be small. 
To avoid this, the following are suggested as criteria for defining the inter-swath quality of data: (a) the median discrepancy angle; (b) the mean and RMSD of horizontal errors, using data quality measures (DQMs) computed on sloping surfaces; and (c) the RMSD for sampled locations in flat areas (defined as areas with less than 5 degrees of slope). It is suggested that 4000-5000 points be uniformly sampled in the overlapping regions of the point cloud, depending on the surface roughness, to measure the discrepancy between swaths. Care must be taken to sample only areas of single-return points. Point-to-plane distance based data quality measures are determined for each sample point, and these measurements are used to determine the above-mentioned parameters. This paper details the measurements, and the analysis of those measurements, required to determine these metrics, i.e. the discrepancy angle, the mean and RMSD of errors in flat regions, and the horizontal errors obtained from measurements extracted from sloping regions (slope greater than 10 degrees). The research is the result of an ad hoc joint working group of the US Geological Survey and the American Society for Photogrammetry and Remote Sensing (ASPRS) Airborne Lidar Committee.
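The point-to-plane measure underlying these metrics reduces to a dot product: the signed distance from a sampled point p to the plane through p0 with unit normal n is (p − p0)·n, and the RMSD aggregates the sampled discrepancies. A minimal sketch with synthetic numbers (not the USGS/ASPRS procedure itself):

```python
# Point-to-plane discrepancy and RMSD between overlapping lidar swaths.
# Synthetic plane and points for illustration only.
import math

def point_to_plane(p, p0, n):
    """Signed distance from point p to the plane through p0 with unit normal n."""
    d = [pi - qi for pi, qi in zip(p, p0)]
    return sum(di * ni for di, ni in zip(d, n))

def rmsd(distances):
    return math.sqrt(sum(d * d for d in distances) / len(distances))

# Plane z = 0 fitted in swath A, and swath-B samples carrying a
# systematic 0.1 m vertical bias.
p0, n = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
swath_b = [(1.0, 2.0, 0.1), (3.0, 1.0, 0.1), (5.0, 4.0, 0.1)]

d = [point_to_plane(p, p0, n) for p in swath_b]
print(rmsd(d))   # ~0.1: the systematic inter-swath offset
```

This also illustrates the sampling caveat in the paper: on a horizontal plane the measure is blind to horizontal shifts, which is why horizontal errors must be sampled on sloping surfaces.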

  10. Investigating Patterns of Errors for Specific Comprehension and Fluency Difficulties

    ERIC Educational Resources Information Center

    Koriakin, Taylor A.; Kaufman, Alan S.

    2017-01-01

    Although word reading has traditionally been viewed as a foundational skill for development of reading fluency and comprehension, some children demonstrate "specific" reading comprehension problems, in the context of intact word reading. The purpose of this study was to identify specific patterns of errors associated with reading…

  11. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. The term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or on how to prevent it. The purpose of this article is to explore the concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  12. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    PubMed

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current understanding of errors in the health technology assessment (HTA) modelling community and to generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. 
Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. 
The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses, but as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity, comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem, are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in an examination of modelling errors. 
There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.

  13. Perception Of "Features" And "Objects": Applications To The Design Of Instrument Panel Displays

    NASA Astrophysics Data System (ADS)

    Poynter, Douglas; Czarnomski, Alan J.

    1988-10-01

    An experiment was conducted to determine whether so-called feature displays allow for faster and more accurate processing compared to object displays. Previous psychological studies indicate that features can be processed in parallel across the visual field, whereas objects must be processed one at a time with the aid of attentional focus. Numbers and letters are examples of objects; line orientation and color are examples of features. In this experiment, subjects were asked to search displays composed of up to 16 elements for the presence of specific elements. The ability to detect, localize, and identify targets was influenced by display format. Digital errors increased with the number of elements, the number of targets, and the distance of the target from the fixation point. Line orientation errors increased only with the number of targets. Several other display types were evaluated, and each produced a pattern of errors similar to either the digital or the line orientation format. Results of the study were discussed in terms of Feature Integration Theory, which distinguishes between elements that are processed with parallel versus serial mechanisms.

  14. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    PubMed

    Cassidy, Nicola; Duggan, Edel; Williams, David J P; Tracey, Joseph A

    2011-07-01

    Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. 
Medication errors with prescription medication predominated for adults (n = 866) and the major medication classes included anti-pyretics and non-opioid analgesics, psychoanaleptics, and psycholeptic agents. Approximately 97% (n = 2279) of medication errors resulted from drug administration errors (comprising a double dose [n = 1040], wrong dose [n = 395], wrong medication [n = 597], wrong route [n = 133], and wrong time [n = 110]). Prescribing and dispensing errors accounted for 0.68% (n = 16) and 2.26% (n = 53) of errors, respectively. Empirical data from poisons information centres facilitate the characterisation of medication errors occurring in the community and across the healthcare spectrum. Poison centre data facilitate the detection of subtle trends in medication errors and can contribute to pharmacovigilance. Collaboration between pharmaceutical manufacturers, consumers, medical, and regulatory communities is needed to advance patient safety and reduce medication errors.

  15. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
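As a rough illustration of why the bootstrap is among the recommended corrections, the sketch below simulates a linear TSRI analysis and resamples both stages together, so that first-stage uncertainty propagates into the standard error. The data-generating values (instrument effect 0.8, true causal effect 0.5, a shared confounder) are invented for the example and are not from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
beta = 0.5                                   # assumed true causal effect
z = rng.binomial(2, 0.3, n).astype(float)    # genotype used as instrument
u = rng.normal(size=n)                       # unmeasured confounder
x = 0.8 * z + u + rng.normal(size=n)         # exposure
y = beta * x + u + rng.normal(size=n)        # outcome

def tsri(z, x, y):
    """Two-stage residual inclusion for a linear outcome model."""
    Z1 = np.column_stack([np.ones_like(z), z])
    r = x - Z1 @ np.linalg.lstsq(Z1, x, rcond=None)[0]   # first-stage residuals
    X2 = np.column_stack([np.ones_like(x), x, r])        # residuals enter stage 2
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]      # coefficient on x

est = tsri(z, x, y)

# Resampling both stages jointly lets the SE reflect first-stage estimation error
boots = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    boots.append(tsri(z[idx], x[idx], y[idx]))
se_boot = np.std(boots, ddof=1)
```

Reporting `est` with the naive second-stage standard error would understate uncertainty; `se_boot` (or the Newey/Terza analytic corrections the abstract names) is the kind of modified standard error the authors recommend.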

  16. Medication Errors in Patients with Enteral Feeding Tubes in the Intensive Care Unit.

    PubMed

    Sohrevardi, Seyed Mojtaba; Jarahzadeh, Mohammad Hossein; Mirzaei, Ehsan; Mirjalili, Mahtabalsadat; Tafti, Arefeh Dehghani; Heydari, Behrooz

    2017-01-01

    Most patients admitted to Intensive Care Units (ICU) have problems in using oral medication or ingesting solid forms of drugs. Selecting the most suitable dosage form in such patients is a challenge. The current study was conducted to assess the frequency and types of errors of oral medication administration in patients with enteral feeding tubes or suffering swallowing problems. A cross-sectional study was performed in the ICU of Shahid Sadoughi Hospital, Yazd, Iran. Patients were assessed for the incidence and types of medication errors occurring in the process of preparation and administration of oral medicines. Ninety-four patients were involved in this study and 10,250 administrations were observed. In total, 4753 errors occurred among the studied patients. The most commonly used drugs were pantoprazole tablet, piracetam syrup, and losartan tablet. A total of 128 different types of drugs and nine different oral pharmaceutical preparations were prescribed for the patients. Forty-one (35.34%) out of 116 different solid drugs (except effervescent tablets and powders) could be substituted by liquid or injectable forms. The most common error was the wrong time of administration. Errors of wrong dose preparation and administration accounted for 24.04% and 25.31% of all errors, respectively. In this study, at least three-fourths of the patients experienced medication errors. The occurrence of these errors can greatly impair the quality of the patients' pharmacotherapy, and more attention should be paid to this issue.

  17. Caffeine enhances real-world language processing: evidence from a proofreading task.

    PubMed

    Brunyé, Tad T; Mahoney, Caroline R; Rapp, David N; Ditman, Tali; Taylor, Holly A

    2012-03-01

    Caffeine has become the most prevalently consumed psychostimulant in the world, but its influences on daily real-world functioning are relatively unknown. The present work investigated the effects of caffeine (0 mg, 100 mg, 200 mg, 400 mg) on a commonplace language task that required readers to identify and correct 4 error types in extended discourse: simple local errors (misspelling 1- to 2-syllable words), complex local errors (misspelling 3- to 5-syllable words), simple global errors (incorrect homophones), and complex global errors (incorrect subject-verb agreement and verb tense). In 2 placebo-controlled, double-blind studies using repeated-measures designs, we found higher detection and repair rates for complex global errors, asymptoting at 200 mg in low consumers (Experiment 1) and peaking at 400 mg in high consumers (Experiment 2). In both cases, covariate analyses demonstrated that arousal state mediated the relationship between caffeine consumption and the detection and repair of complex global errors. Detection and repair rates for the other 3 error types were not affected by caffeine consumption. Taken together, we demonstrate that caffeine has differential effects on error detection and repair as a function of dose and error type, and this relationship is closely tied to caffeine's effects on subjective arousal state. These results support the notion that central nervous system stimulants may enhance global processing of language-based materials and suggest that such effects may originate in caffeine-related right hemisphere brain processes. Implications for understanding the relationships between caffeine consumption and real-world cognitive functioning are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  18. Verifying Parentage and Confirming Identity in Blackberry with a Fingerprinting Set

    USDA-ARS?s Scientific Manuscript database

    Parentage and identity confirmation is an important aspect of clonally propagated, outcrossing crops. Potential errors resulting in misidentification include off-type pollination events, labeling errors, or sports of clones. DNA fingerprinting sets are an excellent solution to quickly identify off-type ...

  19. Distribution of the Determinant of the Sample Correlation Matrix: Monte Carlo Type One Error Rates.

    ERIC Educational Resources Information Center

    Reddon, John R.; And Others

    1985-01-01

    Computer sampling from a multivariate normal spherical population was used to evaluate the type one error rates for a test of sphericity based on the distribution of the determinant of the sample correlation matrix. (Author/LMO)
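A simulation in the spirit of this abstract can be sketched with Bartlett's sphericity statistic, which is a function of the determinant of the sample correlation matrix; under a spherical (identity-correlation) normal population it is approximately chi-squared with p(p-1)/2 degrees of freedom. The sample size, dimension, and replication count below are arbitrary choices for the sketch, not the study's design.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n, p, alpha, reps = 50, 5, 0.05, 2000
df = p * (p - 1) / 2
crit = chi2.ppf(1 - alpha, df)

rejections = 0
for _ in range(reps):
    x = rng.normal(size=(n, p))              # spherical population: H0 is true
    r = np.corrcoef(x, rowvar=False)         # sample correlation matrix
    # Bartlett's statistic from the determinant of R
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(r))
    rejections += stat > crit

empirical_alpha = rejections / reps          # Monte Carlo Type I error rate
```

If the approximation holds, `empirical_alpha` lands near the nominal 0.05; deviations at small n are exactly what Monte Carlo studies of this kind are designed to expose.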

  20. Describing Phonological Paraphasias in Three Variants of Primary Progressive Aphasia.

    PubMed

    Dalton, Sarah Grace Hudspeth; Shultz, Christine; Henry, Maya L; Hillis, Argye E; Richardson, Jessica D

    2018-03-01

    The purpose of this study was to describe the linguistic environment of phonological paraphasias in 3 variants of primary progressive aphasia (semantic, logopenic, and nonfluent) and to describe the profiles of paraphasia production for each of these variants. Discourse samples of 26 individuals diagnosed with primary progressive aphasia were investigated for phonological paraphasias using the criteria established for the Philadelphia Naming Test (Moss Rehabilitation Research Institute, 2013). Phonological paraphasias were coded for paraphasia type, part of speech of the target word, target word frequency, type of segment in error, word position of consonant errors, type of error, and degree of change in consonant errors. Eighteen individuals across the 3 variants produced phonological paraphasias. Most paraphasias were nonword, followed by formal, and then mixed, with errors primarily occurring on nouns and verbs, with relatively few on function words. Most errors were substitutions, followed by addition and deletion errors, and few sequencing errors. Errors were evenly distributed across vowels, consonant singletons, and clusters, with more errors occurring in initial and medial positions of words than in the final position of words. Most consonant errors consisted of only a single-feature change, with few 2- or 3-feature changes. Importantly, paraphasia productions by variant differed from these aggregate results, with unique production patterns for each variant. These results suggest that a system where paraphasias are coded as present versus absent may be insufficient to adequately distinguish between the 3 subtypes of PPA. The 3 variants demonstrate patterns that may be used to improve phenotyping and diagnostic sensitivity. These results should be integrated with recent findings on phonological processing and speech rate. 
Future research should attempt to replicate these results in a larger sample of participants with longer speech samples and varied elicitation tasks. https://doi.org/10.23641/asha.5558107.

  1. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
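The coverage idea at the heart of this study is easy to demonstrate outside the CFA setting. This minimal sketch checks the empirical coverage of an ordinary t-interval for a mean; the settings (n = 10, normal data) are arbitrary and stand in for the much more involved ordinal-CFA intervals the study examines.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps, mu = 10, 5000, 0.0
covered = 0
for _ in range(reps):
    sample = rng.normal(mu, 1.0, n)
    lo, hi = stats.t.interval(0.95, n - 1, loc=sample.mean(),
                              scale=stats.sem(sample))
    covered += lo <= mu <= hi
coverage = covered / reps     # compare against the nominal 0.95
```

The t-interval is exact for normal data, so `coverage` sits near 0.95 here; a method with underestimated standard errors would show undercoverage in the same kind of check, which is precisely the failure the abstract reports for the non-Bayesian estimators.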

  2. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis models (CFA) for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively biased than negatively biased, that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  3. Contribution of long-term accounting for raindrop size distribution variations on quantitative precipitation estimation by weather radar: Disdrometers vs parameter optimization

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Uijlenhoet, R.; Leijnse, H.

    2015-12-01

    Volumetric weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources, which can be subdivided into two main groups: 1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, vertical profile of reflectivity, attenuation, etc.), and 2) errors related to the conversion of the observed reflectivity (Z) values into rainfall intensity (R) and specific attenuation (k). Until the recent wide-scale implementation of dual-polarimetric radar, this second group of errors received relatively little attention, focusing predominantly on precipitation type-dependent Z-R and Z-k relations. The current work accounts for the impact of variations of the drop size distribution (DSD) on the radar QPE performance. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed within The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. However, overall precipitation intensities are still underestimated. This underestimation is expected to result from unaccounted errors (e.g. transmitter calibration, erroneous identification of precipitation as clutter, overshooting and small-scale variability). In case the DSD parameters are optimized, the performance of the radar is further improved, resulting in the best performance of the radar QPE product. 
However, the resulting optimal Z-R and Z-k relations are considerably different from those obtained from disdrometer observations. As such, the best microphysical parameter set results in a minimization of the overall bias, which besides accounting for DSD variations also corrects for the impact of additional error sources.
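The Z-R conversion at the center of this comparison can be sketched in a few lines. The Marshall-Palmer coefficients (a = 200, b = 1.6) used as defaults below are a common climatological choice; a DSD-aware approach, as in the abstract, would replace them with coefficients tied to the fitted or optimized DSD parameters.

```python
import numpy as np

def rain_rate(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b for rain rate R in mm/h; dbz is radar reflectivity in dBZ."""
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)   # dBZ -> Z in mm^6 m^-3
    return (z_linear / a) ** (1.0 / b)

# 30 dBZ under Marshall-Palmer corresponds to roughly 2.7 mm/h
r_mp = rain_rate(30.0)
# A DSD-adjusted relation simply swaps in different (a, b) for the same echo
r_dsd = rain_rate(30.0, a=300.0, b=1.5)
```

The same reflectivity field thus yields different rain-rate maps depending on (a, b), which is why linking those coefficients to disdrometer-observed or gauge-optimized DSD parameters changes the QPE product's bias.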

  4. Randomised trial comparing the recording ability of a novel, electronic emergency documentation system with the AHA paper cardiac arrest record.

    PubMed

    Grigg, Eliot; Palmer, Andrew; Grigg, Jeffrey; Oppenheimer, Peter; Wu, Tim; Roesler, Axel; Nair, Bala; Ross, Brian

    2014-10-01

    To evaluate the ability of an electronic system created at the University of Washington to accurately document prerecorded VF and pulseless electrical activity (PEA) cardiac arrest scenarios compared with the American Heart Association paper cardiac arrest record. 16 anaesthesiology residents were randomly assigned to view one of two prerecorded, simulated VF and PEA scenarios and asked to document the event with either the paper or electronic system. Each subject then repeated the process with the other video and documentation method. Five types of documentation errors were defined: (1) omission, (2) specification, (3) timing, (4) commission and (5) noise. The mean difference in errors between the paper and electronic methods was analysed using a single factor repeated measures ANOVA model. Compared with paper records, the electronic system omitted 6.3 fewer events (95% CI -10.1 to -2.5, p=0.003), which represents a 28% reduction in omission errors. Users recorded 2.9 fewer noise items (95% CI -5.3 to -0.6, p=0.003) when compared with paper, representing a 36% decrease in redundant or irrelevant information. The rates of timing (Δ=-3.2, 95% CI -9.3 to 3.0, p=0.286) and commission (Δ=-4.4, 95% CI -9.4 to 0.5, p=0.075) errors were similar between the electronic system and paper, while the rate of specification errors was about a third lower for the electronic system when compared with the paper record (Δ=-3.2, 95% CI -6.3 to -0.2, p=0.037). Compared with paper documentation, documentation with the electronic system captured 24% more critical information during a simulated medical emergency without loss in data quality. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
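Paired comparisons like those reported above (a mean difference with a 95% CI and a p-value) can be reproduced in miniature. The per-resident omission-error counts below are invented for illustration; they are not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject omission-error counts (paper vs electronic)
paper      = np.array([12, 15, 9, 14, 11, 13, 10, 16])
electronic = np.array([ 6,  9, 5,  8,  7,  6,  4, 10])

diff = electronic - paper                     # negative = fewer errors electronically
mean_diff = diff.mean()
ci = stats.t.interval(0.95, len(diff) - 1,
                      loc=mean_diff, scale=stats.sem(diff))
t_stat, p_value = stats.ttest_rel(electronic, paper)
```

A CI entirely below zero, as produced here, is the paired analogue of the study's finding that the electronic system omitted significantly fewer events than the paper record.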

  5. Planned versus Unplanned Contrasts: Exactly Why Planned Contrasts Tend To Have More Power against Type II Error.

    ERIC Educational Resources Information Center

    Wang, Lin

    The literature is reviewed regarding the difference between planned contrasts, ANOVA, and unplanned contrasts. The relationship between the statistical power of a test method and Type I and Type II error rates is first explored to provide a framework for the discussion. The concepts and formulation of contrast, orthogonal and non-orthogonal contrasts are…

  6. Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine.

    PubMed

    Okafor, Nnaemeka; Payne, Velma L; Chathampally, Yashwant; Miller, Sara; Doshi, Pratik; Singh, Hardeep

    2016-04-01

    Diagnostic errors are common in the emergency department (ED), but few studies have comprehensively evaluated their types and origins. We analysed incidents reported by ED physicians to determine disease conditions, contributory factors and patient harm associated with ED-related diagnostic errors. Between 1 March 2009 and 31 December 2013, ED physicians reported 509 incidents using a department-specific voluntary incident-reporting system that we implemented at two large academic hospital-affiliated EDs. For this study, we analysed 209 incidents related to diagnosis. A quality assurance team led by an ED physician champion reviewed each incident and interviewed physicians when necessary to confirm the presence/absence of diagnostic error and to determine the contributory factors. We generated descriptive statistics quantifying disease conditions involved, contributory factors and patient harm from errors. Among the 209 incidents, we identified 214 diagnostic errors associated with 65 unique diseases/conditions, including sepsis (9.6%), acute coronary syndrome (9.1%), fractures (8.6%) and vascular injuries (8.6%). Contributory factors included cognitive (n=317), system related (n=192) and non-remediable (n=106). Cognitive factors included faulty information verification (41.3%) and faulty information processing (30.6%) whereas system factors included high workload (34.4%) and inefficient ED processes (40.1%). Non-remediable factors included atypical presentation (31.3%) and the patients' inability to provide a history (31.3%). Most errors (75%) involved multiple factors. Major harm was associated with 34/209 (16.3%) of reported incidents. Most diagnostic errors in ED appeared to relate to common disease conditions. While sustaining diagnostic error reporting programmes might be challenging, our analysis reveals the potential value of such systems in identifying targets for improving patient safety in the ED. Published by the BMJ Publishing Group Limited. 

  7. Systematic review of the evidence for Trails B cut-off scores in assessing fitness-to-drive.

    PubMed

    Roy, Mononita; Molnar, Frank

    2013-01-01

    Fitness-to-drive guidelines recommend employing the Trail Making B Test (a.k.a. Trails B), but do not provide guidance regarding cut-off scores. There is ongoing debate regarding the optimal cut-off score on the Trails B test. The objective of this study was to address this controversy by systematically reviewing the evidence for specific Trails B cut-off scores (e.g., cut-offs in both time to completion and number of errors) with respect to fitness-to-drive. Systematic review of all prospective cohort, retrospective cohort, case-control, correlation, and cross-sectional studies reporting the ability of the Trails B to predict driving safety that were published in English-language, peer-reviewed journals. Forty-seven articles were reviewed. None of the articles justified sample sizes via formal calculations. Cut-off scores reported based on research include: 90 seconds, 133 seconds, 147 seconds, 180 seconds, and < 3 errors. There is support for the previously published Trails B cut-offs of 3 minutes or 3 errors (the '3 or 3 rule'). Major methodological limitations of this body of research were uncovered including (1) lack of justification of sample size leaving studies open to Type II error (i.e., false negative findings), and (2) excessive focus on associations rather than clinically useful cut-off scores.
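The '3 or 3 rule' supported by this review is simple enough to state as code. A minimal sketch follows; the function name is invented, and the inclusive ">=" boundaries are an assumption, since the review does not state whether the cut-off values themselves count as failures.

```python
def trails_b_flag(completion_seconds: float, errors: int) -> bool:
    """'3 or 3 rule': flag a driver for further fitness-to-drive assessment
    when Trails B takes 3 minutes (180 s) or longer, or when 3 or more
    errors are made. Inclusive boundaries are assumed here."""
    return completion_seconds >= 180 or errors >= 3
```

A result of True suggests further on-road assessment; it is a screening aid, not a determination on its own.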

  8. Analyzing Genome-Wide Association Studies with an FDR Controlling Modification of the Bayesian Information Criterion

    PubMed Central

    Dolejsi, Erich; Bodenstorfer, Bernhard; Frommlet, Florian

    2014-01-01

    The prevailing method of analyzing GWAS data is still to test each marker individually, although from a statistical point of view it is quite obvious that, in the case of complex traits, such single-marker tests are not ideal. Recently several model selection approaches for GWAS have been suggested, most of them based on LASSO-type procedures. Here we discuss an alternative model selection approach based on a modification of the Bayesian Information Criterion (mBIC2), which was previously shown to have certain asymptotic optimality properties in terms of minimizing the misclassification error. Heuristic search strategies are introduced which attempt to find the model that minimizes mBIC2, and which are efficient enough to allow the analysis of GWAS data. Our approach is implemented in a software package called MOSGWA. Its performance in case-control GWAS is compared with the two algorithms HLASSO and d-GWASelect, as well as with single-marker tests, in a simulation study based on real SNP data from the POPRES sample. Our results show that MOSGWA performs slightly better than HLASSO; specifically, for more complex models MOSGWA is more powerful with only a slight increase in Type I error. On the other hand, according to our simulations, d-GWASelect does not control the Type I error at all when used to automatically determine the number of important SNPs. We also reanalyze the GWAS data from the Wellcome Trust Case-Control Consortium and compare the findings of the different procedures; for complex diseases, MOSGWA detects a number of interesting SNPs that are not found by other methods. PMID:25061809
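For readers unfamiliar with mBIC2, a minimal sketch of evaluating the criterion for one candidate set of markers follows. The specific form n·log(RSS/n) + k·log(n) + 2k·log(p/4) − 2·log(k!) is our reading of the mBIC2 literature (an assumption to verify against Frommlet et al.), and the function is an illustration, not MOSGWA's implementation.

```python
import math
import numpy as np

def mbic2(y, X_subset, p_total):
    """Evaluate a sketch of the mBIC2 criterion for one candidate model.

    Assumed form: n*log(RSS/n) + k*log(n) + 2k*log(p/4) - 2*log(k!),
    where k is the number of selected markers out of p_total.
    """
    n, k = X_subset.shape
    beta, *_ = np.linalg.lstsq(X_subset, y, rcond=None)
    rss = float(np.sum((y - X_subset @ beta) ** 2))
    # math.lgamma(k + 1) == log(k!)
    penalty = k * math.log(n) + 2 * k * math.log(p_total / 4) - 2 * math.lgamma(k + 1)
    return n * math.log(rss / n) + penalty
```

A heuristic search would repeatedly propose adding/removing markers and keep the model with the smaller criterion value; the extra 2k·log(p/4) term is what makes the penalty scale with the number of candidate SNPs.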

  9. Camera calibration correction in shape from inconsistent silhouette

    USDA-ARS?s Scientific Manuscript database

    The use of shape from silhouette for reconstruction tasks is plagued by two types of real-world errors: camera calibration error and silhouette segmentation error. When either error is present, we call the problem the Shape from Inconsistent Silhouette (SfIS) problem. In this paper, we show how sm...

  10. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    NASA Technical Reports Server (NTRS)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is described, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  11. Powerful Inference with the D-Statistic on Low-Coverage Whole-Genome Data.

    PubMed

    Soraggi, Samuele; Wiuf, Carsten; Albrechtsen, Anders

    2018-02-02

    The detection of ancient gene flow between human populations is an important issue in population genetics. A common tool for detecting ancient admixture events is the D-statistic. The D-statistic is based on the hypothesis of a genetic relationship that involves four populations, whose correctness is assessed by evaluating specific coincidences of alleles between the groups. When working with high-throughput sequencing data, calling genotypes accurately is not always possible; therefore, the D-statistic currently samples a single base from the reads of one individual per population. This implies ignoring much of the information in the data, an issue especially striking in the case of ancient genomes. We provide a significant improvement to overcome the problems of the D-statistic by considering all reads from multiple individuals in each population. We also apply type-specific error correction to combat the problems of sequencing errors, and show a way to correct for introgression from an external population that is not part of the supposed genetic relationship, and how this leads to an estimate of the admixture rate. We prove that the D-statistic is approximated by a standard normal distribution. Furthermore, we show that our method outperforms the traditional D-statistic in detecting admixtures. The power gain is most pronounced for low and medium sequencing depth (1-10×), and performances are as good as with perfectly called genotypes at a sequencing depth of 2×. We show the reliability of error correction in scenarios with simulated errors and ancient data, and correct for introgression in known scenarios to estimate the admixture rates. Copyright © 2018 Soraggi et al.
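The count-based form of the D-statistic that this paper improves upon can be sketched in a few lines. The simple normal Z-score below is a crude binomial approximation under the null, not the paper's read-based estimator or its variance estimate.

```python
import math

def d_statistic(n_abba: int, n_baba: int):
    """Count-based ABBA-BABA D-statistic with a simple normal Z-score.

    D = (ABBA - BABA) / (ABBA + BABA). Under the null of no admixture the
    two counts are symmetric, so Z = (ABBA - BABA) / sqrt(ABBA + BABA)
    is a rough binomial approximation (the paper instead uses all reads
    from multiple individuals and a more careful variance, not shown).
    """
    total = n_abba + n_baba
    return (n_abba - n_baba) / total, (n_abba - n_baba) / math.sqrt(total)
```

For example, 30 ABBA vs 10 BABA sites gives D = 0.5 with Z ≈ 3.16, suggesting an excess of allele sharing consistent with admixture.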

  12. On the timing problem in optical PPM communications.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1971-01-01

    Investigation of the effects of imperfect timing in a direct-detection (noncoherent) optical system using pulse-position-modulation bits. Special emphasis is placed on the specification of timing accuracy and on an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors, from which average error probabilities can be computed for specific synchronization methods. Significantly, a residual, or irreducible, error probability is shown to be present, due entirely to the timing system, which cannot be overcome by the data channel.

  13. Social contact patterns can buffer costs of forgetting in the evolution of cooperation.

    PubMed

    Stevens, Jeffrey R; Woike, Jan K; Schooler, Lael J; Lindner, Stefan; Pachur, Thorsten

    2018-06-13

    Analyses of the evolution of cooperation often rely on two simplifying assumptions: (i) individuals interact equally frequently with all social network members and (ii) they accurately remember each partner's past cooperation or defection. Here, we examine how more realistic, skewed patterns of contact, in which individuals interact primarily with only a subset of their network's members, influence cooperation. In addition, we test whether skewed contact patterns can counteract the decrease in cooperation caused by memory errors (i.e. forgetting). Finally, we compare two types of memory error that vary in whether forgotten interactions are replaced with random actions or with actions from previous encounters. We use evolutionary simulations of repeated prisoner's dilemma games that vary agents' contact patterns, forgetting rates and types of memory error. We find that highly skewed contact patterns foster cooperation and also buffer the detrimental effects of forgetting. The type of memory error used also influences cooperation rates. Our findings reveal that contact pattern, type of memory error, and their interaction play previously neglected but important roles in cooperation. Although cognitive limitations may constrain the evolution of cooperation, social contact patterns can counteract some of these constraints. © 2018 The Author(s).

  14. The effect of the electronic transmission of prescriptions on dispensing errors and prescription enhancements made in English community pharmacies: a naturalistic stepped wedge study

    PubMed Central

    Franklin, Bryony Dean; Reynolds, Matthew; Sadler, Stacey; Hibberd, Ralph; Avery, Anthony J; Armstrong, Sarah J; Mehta, Rajnikant; Boyd, Matthew J; Barber, Nick

    2014-01-01

    Objectives To compare prevalence and types of dispensing errors and pharmacists’ labelling enhancements, for prescriptions transmitted electronically versus paper prescriptions. Design Naturalistic stepped wedge study. Setting 15 English community pharmacies. Intervention Electronic transmission of prescriptions between prescriber and pharmacy. Main outcome measures Prevalence of labelling errors, content errors and labelling enhancements (beneficial additions to the instructions), as identified by researchers visiting each pharmacy. Results Overall, we identified labelling errors in 5.4% of 16 357 dispensed items, and content errors in 1.4%; enhancements were made for 13.6%. Pharmacists also edited the label for a further 21.9% of electronically transmitted items. Electronically transmitted prescriptions had a higher prevalence of labelling errors (7.4% of 3733 items) than other prescriptions (4.8% of 12 624); OR 1.46 (95% CI 1.21 to 1.76). There was no difference for content errors or enhancements. The increase in labelling errors was mainly accounted for by errors (mainly at one pharmacy) involving omission of the indication, where specified by the prescriber, from the label. A sensitivity analysis in which these cases (n=158) were not considered errors revealed no remaining difference between prescription types. Conclusions We identified a higher prevalence of labelling errors for items transmitted electronically, but this was predominantly accounted for by local practice in a single pharmacy, independent of prescription type. Community pharmacists made labelling enhancements to about one in seven dispensed items, whether electronically transmitted or not. Community pharmacists, prescribers, professional bodies and software providers should work together to agree how items should be dispensed and labelled to best reap the benefits of electronically transmitted prescriptions. 
Community pharmacists need to ensure their computer systems are promptly updated to help reduce errors. PMID:24742778
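The reported odds ratio can be checked approximately from the percentages in the abstract. The counts below are reconstructions (7.4% of 3733 and 4.8% of 12,624, rounded), and the resulting crude estimate (~1.58) differs somewhat from the published OR of 1.46, which presumably came from a model accounting for study structure.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table with a Woolf (log-normal) 95% CI."""
    or_ = (a * d) / (b * c)
    half = z * math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, math.exp(math.log(or_) - half), math.exp(math.log(or_) + half)

# Counts reconstructed (and rounded) from the reported percentages:
# 7.4% of 3733 electronic items and 4.8% of 12,624 paper items.
a, b = 276, 3733 - 276      # electronic: labelling errors / no errors
c, d = 606, 12624 - 606     # paper: labelling errors / no errors
crude_or, lo, hi = odds_ratio_ci(a, b, c, d)
```

The gap between the crude value and the published 1.46 is a reminder that published ORs from stepped wedge designs are usually model-based rather than raw 2x2 ratios.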

  15. The effect of the electronic transmission of prescriptions on dispensing errors and prescription enhancements made in English community pharmacies: a naturalistic stepped wedge study.

    PubMed

    Franklin, Bryony Dean; Reynolds, Matthew; Sadler, Stacey; Hibberd, Ralph; Avery, Anthony J; Armstrong, Sarah J; Mehta, Rajnikant; Boyd, Matthew J; Barber, Nick

    2014-08-01

    To compare prevalence and types of dispensing errors and pharmacists' labelling enhancements, for prescriptions transmitted electronically versus paper prescriptions. Naturalistic stepped wedge study. 15 English community pharmacies. Electronic transmission of prescriptions between prescriber and pharmacy. Prevalence of labelling errors, content errors and labelling enhancements (beneficial additions to the instructions), as identified by researchers visiting each pharmacy. Overall, we identified labelling errors in 5.4% of 16,357 dispensed items, and content errors in 1.4%; enhancements were made for 13.6%. Pharmacists also edited the label for a further 21.9% of electronically transmitted items. Electronically transmitted prescriptions had a higher prevalence of labelling errors (7.4% of 3733 items) than other prescriptions (4.8% of 12,624); OR 1.46 (95% CI 1.21 to 1.76). There was no difference for content errors or enhancements. The increase in labelling errors was mainly accounted for by errors (mainly at one pharmacy) involving omission of the indication, where specified by the prescriber, from the label. A sensitivity analysis in which these cases (n=158) were not considered errors revealed no remaining difference between prescription types. We identified a higher prevalence of labelling errors for items transmitted electronically, but this was predominantly accounted for by local practice in a single pharmacy, independent of prescription type. Community pharmacists made labelling enhancements to about one in seven dispensed items, whether electronically transmitted or not. Community pharmacists, prescribers, professional bodies and software providers should work together to agree how items should be dispensed and labelled to best reap the benefits of electronically transmitted prescriptions. Community pharmacists need to ensure their computer systems are promptly updated to help reduce errors. Published by the BMJ Publishing Group Limited. 

  16. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
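Type I error rates like those reported here are typically estimated by Monte Carlo simulation under the null. A small sketch using SciPy's `levene` with `center='median'` (the Brown-Forsythe variant); the group sizes and simulation count are illustrative, not the paper's design.

```python
import numpy as np
from scipy.stats import levene  # center='median' selects the Brown-Forsythe test

rng = np.random.default_rng(0)
n_sim, alpha = 2000, 0.05
rejections = 0
for _ in range(n_sim):
    # Null case: two normal groups with identical scale, n = 20 each.
    a = rng.normal(size=20)
    b = rng.normal(size=20)
    _, p = levene(a, b, center='median')
    rejections += p < alpha
rate = rejections / n_sim   # estimated Type I error rate; should sit near 0.05
```

Replacing the normal draws with skewed or heavy-tailed distributions is how such studies probe robustness of the nominal rate.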

  17. Alcohol and liver cirrhosis mortality in the United States: comparison of methods for the analyses of time-series panel data models.

    PubMed

    Ye, Yu; Kerr, William C

    2011-01-01

    To explore various model specifications in estimating relationships between liver cirrhosis mortality rates and per capita alcohol consumption in aggregate-level cross-section time-series data. Using a series of liver cirrhosis mortality rates from 1950 to 2002 for 47 U.S. states, the effects of alcohol consumption were estimated from pooled autoregressive integrated moving average (ARIMA) models and 4 types of panel data models: generalized estimating equation, generalized least square, fixed effect, and multilevel models. Various specifications of error term structure under each type of model were also examined. Different approaches controlling for time trends and for using concurrent or accumulated consumption as predictors were also evaluated. When cirrhosis mortality was predicted by total alcohol, highly consistent estimates were found between ARIMA and panel data analyses, with an average overall effect of 0.07 to 0.09. Less consistent estimates were derived using spirits, beer, and wine consumption as predictors. When multiple geographic time series are combined as panel data, none of the existing models can accommodate all sources of heterogeneity, so any type of panel model must employ some form of generalization. Different types of panel data models should thus be estimated to examine the robustness of findings. We also suggest cautious interpretation when beverage-specific volumes are used as predictors. Copyright © 2010 by the Research Society on Alcoholism.

  18. Analysis of behavior of focusing error signals generated by astigmatic method when a focused spot moves beyond the radius of a land-groove-type optical disk

    NASA Astrophysics Data System (ADS)

    Shinoda, Masahisa; Nakatani, Hidehiko; Nakai, Kenya; Ohmaki, Masayuki

    2015-09-01

    We theoretically calculate behaviors of focusing error signals generated by an astigmatic method in a land-groove-type optical disk. The focusing error signal from the land does not coincide with that from the groove. This behavior is enhanced when a focused spot of an optical pickup moves beyond the radius of the optical disk. A gain difference between the slope sensitivities of focusing error signals from the land and the groove is an important factor with respect to stable focusing servo control. In our calculation, the format of digital versatile disc-random access memory (DVD-RAM) is adopted as the land-groove-type optical disk model, and the dependences of the gain difference on various factors are investigated. The gain difference strongly depends on the optical intensity distribution of the laser beam in the optical pickup. The calculation method and results in this paper will be reflected in newly developed land-groove-type optical disks.

  19. Accounting for Relatedness in Family Based Genetic Association Studies

    PubMed Central

    McArdle, P.F.; O’Connell, J.R.; Pollin, T.I.; Baumgarten, M.; Shuldiner, A.R.; Peyser, P.A.; Mitchell, B.D.

    2007-01-01

    Objective Assess the differences in point estimates, power and type 1 error rates when accounting for and ignoring family structure in genetic tests of association. Methods We compare by simulation the performance of analytic models using variance components to account for family structure and regression models that ignore relatedness for a range of possible family based study designs (i.e., sib pairs vs. large sibships vs. nuclear families vs. extended families). Results Our analyses indicate that effect size estimates and power are not significantly affected by ignoring family structure. Type 1 error rates increase when family structure is ignored, as density of family structures increases, and as trait heritability increases. For discrete traits with moderate levels of heritability and across many common sampling designs, type 1 error rates rise from a nominal 0.05 to 0.11. Conclusion Ignoring family structure may be useful in screening, although it comes at the cost of an increased type 1 error rate, the magnitude of which depends on trait heritability and pedigree configuration. PMID:17570925

  20. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors and these reported therapeutic-related errors were mainly medication errors. Keyword searches combined with manual review indicated some medical errors that were reported in medical records. It had a low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
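The keyword screen described above can be sketched as follows. The toy notes, their labels, and the prefix matching (used here to catch plurals such as "errors") are invented for illustration; they are not the study's data or its exact matching rules.

```python
import re

# The five trigger terms from the study; prefix matching after a word
# boundary is an assumption made here so that plurals also match.
KEYWORDS = ["mistake", "error", "incorrect", "inadvertent", "iatrogenic"]

def keyword_hits(note: str):
    """Return the trigger keywords whose prefix occurs in a narrative note."""
    return [k for k in KEYWORDS if re.search(rf"\b{k}", note, re.IGNORECASE)]

# Invented toy notes; the booleans stand in for manual physician review.
notes = [
    ("Medication error: wrong dose of heparin was given.", True),
    ("No errors of metabolism are suspected.", False),
    ("Standard error of the estimate was 2.1.", False),
]
labels = [is_error for note, is_error in notes if keyword_hits(note)]
ppv = sum(labels) / len(labels)   # true errors among keyword-flagged notes
```

Even this toy corpus shows why the study's positive predictive values were low: "error" has many clinical senses unrelated to medical error, so manual review of hits remains essential.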

  1. Computational neuroanatomy: mapping cell-type densities in the mouse brain, simulations from the Allen Brain Atlas

    NASA Astrophysics Data System (ADS)

    Grange, Pascal

    2015-09-01

    The Allen Brain Atlas of the adult mouse (ABA) consists of digitized expression profiles of thousands of genes in the mouse brain, co-registered to a common three-dimensional template (the Allen Reference Atlas). This brain-wide, genome-wide data set has triggered a renaissance in neuroanatomy. Its voxelized version (with cubic voxels of side 200 microns) is available for desktop computation in MATLAB. On the other hand, brain cells exhibit a great phenotypic diversity (in terms of size, shape and electrophysiological activity), which has inspired the names of some well-studied cell types, such as granule cells and medium spiny neurons. However, no exhaustive taxonomy of brain cells is available. A genetic classification of brain cells is being undertaken, and some cell types have been characterized by their transcriptome profiles. However, given a cell type characterized by its transcriptome, it is not clear where else in the brain similar cells can be found. The ABA can be used to solve this region-specificity problem in a data-driven way: rewriting the brain-wide expression profiles of all genes in the atlas as a sum of cell-type-specific transcriptome profiles is equivalent to solving a quadratic optimization problem at each voxel in the brain. However, the estimated brain-wide densities of 64 cell types published recently were based on one series of co-registered coronal in situ hybridization (ISH) images per gene, whereas the online ABA contains several image series per gene, including sagittal ones. In the presented work, we simulate the variability of cell-type densities in a Monte Carlo way by repeatedly drawing a random image series for each gene and solving the optimization problem. This yields error bars on the region-specificity of cell types.
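The per-voxel quadratic problem described above is a non-negative least-squares fit, sketched here on invented dimensions (the real problem uses the ABA's gene panel and 64 cell-type transcriptome profiles).

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Toy stand-in for one voxel: columns of E are cell-type transcriptome
# profiles, v is the voxel's gene-expression vector, and the fit
#   min ||E @ rho - v||^2  subject to  rho >= 0
# estimates non-negative cell-type densities rho.
E = rng.uniform(size=(50, 4))            # 50 genes, 4 cell types (invented)
rho_true = np.array([0.7, 0.0, 0.2, 0.1])
v = E @ rho_true                         # noiseless voxel for illustration
rho_hat, residual = nnls(E, v)
```

The Monte Carlo step in the paper amounts to re-forming `E` and `v` from a randomly drawn image series per gene and repeating this fit, giving a distribution of `rho_hat` per voxel.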

  2. Statistical aspects of the TNK-S2B trial of tenecteplase versus alteplase in acute ischemic stroke: an efficient, dose-adaptive, seamless phase II/III design.

    PubMed

    Levin, Bruce; Thompson, John L P; Chakraborty, Bibhas; Levy, Gilberto; MacArthur, Robert; Haley, E Clarke

    2011-08-01

    TNK-S2B, an innovative, randomized, seamless phase II/III trial of tenecteplase versus rt-PA for acute ischemic stroke, terminated for slow enrollment before regulatory approval of use of phase II patients in phase III. (1) To review the trial design and comprehensive type I error rate simulations and (2) to discuss issues raised during regulatory review, to facilitate future approval of similar designs. In phase II, an early (24-h) outcome and adaptive sequential procedure selected one of three tenecteplase doses for phase III comparison with rt-PA. Decision rules comparing this dose to rt-PA would cause stopping for futility at phase II end, or continuation to phase III. Phase III incorporated two co-primary hypotheses, allowing for a treatment effect at either end of the trichotomized Rankin scale. Assuming no early termination, four interim analyses and one final analysis of 1908 patients provided an experiment-wise type I error rate of <0.05. Over 1,000 distribution scenarios, each involving 40,000 replications, the maximum type I error in phase III was 0.038. Inflation from the dose selection was more than offset by the one-half continuity correction in the test statistics. Inflation from repeated interim analyses was more than offset by the reduction from the clinical stopping rules for futility at the first interim analysis. Design complexity and evolving regulatory requirements lengthened the review process. (1) The design was innovative and efficient. Per protocol, type I error was well controlled for the co-primary phase III hypothesis tests, and experiment-wise. (2a) Time must be allowed for communications with regulatory reviewers from first design stages. (2b) Adequate type I error control must be demonstrated. (2c) Greater clarity is needed on (i) whether this includes demonstration of type I error control if the protocol is violated and (ii) whether simulations of type I error control are acceptable. 
(2d) Regulatory agency concerns that protocols for futility stopping may not be followed may be allayed by submitting interim analysis results to them as these analyses occur.
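The inflation that the trial's interim-analysis machinery guards against is easy to demonstrate by simulation: testing accruing null data at a nominal two-sided 0.05 at each of five looks, with no correction, pushes the experiment-wise Type I error to roughly 0.14. The look schedule and simulation size below are illustrative, not the trial's.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sim = 4000
looks = (100, 200, 300, 400, 500)   # five equally spaced interim looks

# Under H0 (mean 0, known sd 1), reject at any look where |Z| > 1.96.
any_reject = 0
for _ in range(n_sim):
    s = np.cumsum(rng.normal(size=looks[-1]))
    z = np.array([s[k - 1] / np.sqrt(k) for k in looks])
    any_reject += bool(np.any(np.abs(z) > 1.96))
rate = any_reject / n_sim            # experiment-wise Type I error, well above 0.05
```

This is why designs like TNK-S2B must demonstrate, usually by large simulation studies, that their stopping rules and interim boundaries keep the overall rate below 0.05.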

  3. Identification of driver errors : overview and recommendations

    DOT National Transportation Integrated Search

    2002-08-01

    Driver error is cited as a contributing factor in most automobile crashes, and although estimates vary by source, driver error is cited as the principal cause of from 45 to 75 percent of crashes. However, the specific errors that lead to crashes, and...

  4. The distribution of refractive errors among children attending Lumbini Eye Institute, Nepal.

    PubMed

    Rai, S; Thapa, H B; Sharma, M K; Dhakhwa, K; Karki, R

    2012-01-01

    Uncorrected refractive error is an important cause of childhood blindness and visual impairment. To describe the patterns of refractive errors among children attending the outpatient clinic at the Department of Pediatric Ophthalmology, Lumbini Eye Institute, Bhairahawa, Nepal. Records of 133 children with refractive errors aged 5 - 15 years from both the urban and rural areas of Nepal and the adjacent territory of India attending the hospital between September and November 2010 were examined for patterns of refractive errors. The SPSS statistical software was used to perform data analysis. The commonest type of refractive error among the children was astigmatism (47 %) followed by myopia (34 %) and hyperopia (15 %). The refractive error was more prevalent among children of both the genders of age group 11-15 years as compared to their younger counterparts (RR = 1.22, 95 % CI = 0.66 - 2.25). The refractive error was more common (70 %) in the rural than the urban children (26 %). The rural females had a higher (38 %) prevalence of myopia than urban females (18 %). Among the children with refractive errors, only 57 % were using spectacles at the initial presentation. Astigmatism is the commonest type of refractive error among the children of age 5 - 15 years, followed by myopia and hypermetropia. Refractive error remains uncorrected in a significant number of children. © NEPjOPH.

  5. Does applying technology throughout the medication use process improve patient safety with antineoplastics?

    PubMed

    Bubalo, Joseph; Warden, Bruce A; Wiegel, Joshua J; Nishida, Tess; Handel, Evelyn; Svoboda, Leanne M; Nguyen, Lam; Edillo, P Neil

    2014-12-01

    Medical errors, in particular medication errors, continue to be a troublesome factor in the delivery of safe and effective patient care. Antineoplastic agents represent a group of medications highly susceptible to medication errors due to their complex regimens and narrow therapeutic indices. As the majority of these medication errors are associated with breakdowns in poorly defined systems, developing technologies and evolving workflows seem to be a logical approach to provide added safeguards against medication errors. This article will review both the pros and cons of today's technologies and their ability to simplify the medication use process, reduce medication errors, improve documentation, reduce healthcare costs and increase provider efficiency as they relate to the use of antineoplastic therapy throughout the medication use process. Several technologies, mainly computerized provider order entry (CPOE), barcode medication administration (BCMA), smart pumps, electronic medication administration record (eMAR), and telepharmacy, have been well described and proven to reduce medication errors, improve adherence to quality metrics, and/or reduce healthcare costs in a broad scope of patients. The utilization of these technologies during antineoplastic therapy is weak at best and lacking for most. Specific to the antineoplastic medication use system, the only technology with data to adequately support a claim of reduced medication errors is CPOE. In addition to the benefits these technologies can provide, it is also important to recognize their potential to induce new types of errors and inefficiencies which can negatively impact patient care. The utilization of technology reduces but does not eliminate the potential for error. The evidence base to support technology in preventing medication errors is limited in general but even more deficient in the realm of antineoplastic therapy.
Though CPOE has the best evidence to support its use in the antineoplastic population, benefit from many other technologies may have to be inferred based on data from other patient populations. As health systems begin to widely adopt and implement new technologies it is important to critically assess their effectiveness in improving patient safety. © The Author(s) 2013.

  6. Improved estimation of heavy rainfall by weather radar after reflectivity correction and accounting for raindrop size distribution variability

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2015-04-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. 
Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.
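The reflectivity-to-rain-rate step discussed above can be sketched as a power-law inversion. The classic Marshall-Palmer coefficients used as defaults below are illustrative only; the paper's point is precisely that (a, b) should follow from the (normalized) drop size distribution rather than fixed constants.

```python
import math

def rain_rate_from_dbz(dbz: float, a: float = 200.0, b: float = 1.6) -> float:
    """Invert a Z-R power law Z = a * R**b to get rain rate R (mm/h).

    dbz is reflectivity in dBZ; Z is in mm^6/m^3. Defaults a=200, b=1.6
    are the classic Marshall-Palmer values, used here as a stand-in for
    DSD-derived coefficients.
    """
    z_linear = 10 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)
```

For example, 40 dBZ corresponds to Z = 10,000 mm^6/m^3 and, with these defaults, a rain rate of about 11.5 mm/h; changing (a, b) to DSD-fitted values is what shifts such estimates toward the gauge totals.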

  7. The impact of reflectivity correction and accounting for raindrop size distribution variability to improve precipitation estimation by weather radar for an extreme low-land mesoscale convective system

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-11-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error, such that the radar estimated only about 30% of the precipitation observed by rain gauges. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in the Netherlands. In general, weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z - R) and radar reflectivity-specific attenuation (Z - k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products.
Using the disdrometer information, the best results were obtained when no differentiation between precipitation types (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates improve further and closely resemble the observations when one differentiates between precipitation types. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable whether single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious when using locally obtained disdrometer measurements.
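
The power-law inversion at the heart of the Z-R step can be sketched in a few lines. This is a minimal illustration using the classical Marshall-Palmer parameters (a = 200, b = 1.6); the paper instead ties the parameters to the normalized DSD.

```python
def rain_rate_from_reflectivity(dbz, a=200.0, b=1.6):
    """Invert the power law Z = a * R**b to get rain rate R (mm/h).

    dbz : reflectivity in dBZ, so Z (mm^6 m^-3) = 10**(dbz / 10).
    a, b: power-law parameters; the Marshall-Palmer values are used here
    purely as an illustration, not the DSD-fitted values from the paper.
    """
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

# Example: 40 dBZ with Marshall-Palmer parameters is roughly 11.5 mm/h.
print(round(rain_rate_from_reflectivity(40.0), 1))
```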

  8. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    USGS Publications Warehouse

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. The errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise, modulated by the envelope first, and then filtered).
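
The distinction between the two model types comes down to the order of filtering and envelope modulation. A minimal sketch, using an invented one-pole low-pass filter and a boxcar envelope rather than the paper's ground-motion filter and modulating function:

```python
import random

def one_pole_filter(x, alpha=0.9):
    """Simple one-pole low-pass filter: y[n] = alpha*y[n-1] + (1-alpha)*x[n]."""
    y, out = 0.0, []
    for v in x:
        y = alpha * y + (1.0 - alpha) * v
        out.append(y)
    return out

def envelope(n, duration):
    """Boxcar envelope; an illustrative stand-in for the modulating function."""
    return [1.0 if i < duration else 0.0 for i in range(n)]

random.seed(0)
n, dur = 2000, 500
white = [random.gauss(0.0, 1.0) for _ in range(n)]
env = envelope(n, dur)

# Uniformly modulated filtered white noise: filter first, then modulate.
modulated_filtered = [e * y for e, y in zip(env, one_pole_filter(white))]

# Filtered shot noise: modulate first, then filter -- the ordering the
# abstract recommends to eliminate the built-in low-frequency errors.
# Note the filter ring-out persists past the envelope, unlike above.
filtered_shot = one_pole_filter([e * w for e, w in zip(env, white)])
```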

  9. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    PubMed

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of the 310 pediatric chemotherapy error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  10. Answering Contextually Demanding Questions: Pragmatic Errors Produced by Children with Asperger Syndrome or High-Functioning Autism

    ERIC Educational Resources Information Center

    Loukusa, Soile; Leinonen, Eeva; Jussila, Katja; Mattila, Marja-Leena; Ryder, Nuala; Ebeling, Hanna; Moilanen, Irma

    2007-01-01

    This study examined irrelevant/incorrect answers produced by children with Asperger syndrome or high-functioning autism (7-9-year-olds and 10-12-year-olds) and normally developing children (7-9-year-olds). The errors produced were divided into three types: in Type 1, the child answered the original question incorrectly, in Type 2, the child gave a…

  11. Evaluation of HACCP Plans of Food Industries: Case Study Conducted by the Servizio di Igiene degli Alimenti e della Nutrizione (Food and Nutrition Health Service) of the Local Health Authority of Foggia, Italy

    PubMed Central

    Panunzio, Michele F.; Antoniciello, Antonietta; Pisano, Alessandra; Rosa, Giovanna

    2007-01-01

    With respect to food safety, many works have studied the effectiveness of the self-monitoring plans of food companies, designed using the Hazard Analysis and Critical Control Point (HACCP) method. On the other hand, in-depth research has not been made concerning the adherence of the plans to HACCP standards. During our research, we evaluated 116 self-monitoring plans adopted by food companies located in the territory of the Local Health Authority (LHA) of Foggia, Italy. The general errors (terminology, philosophy and redundancy) and the specific errors (transversal plan, critical limits, hazard specificity, and lack of procedures) were standardized. Concerning the general errors, terminological errors pertain to half the plans examined, 47% include superfluous elements and 60% have repetitive subjects. With regard to the specific errors, 77% of the plans examined contained specific errors. The evaluation has pointed out the lack of comprehension of the HACCP system by the food companies and has allowed the Servizio di Igiene degli Alimenti e della Nutrizione (Food and Nutrition Health Service), in its capacity as a control body, to intervene with the companies in order to improve the design of HACCP plans. PMID:17911662

  12. Acoustic Evidence for Phonologically Mismatched Speech Errors

    ERIC Educational Resources Information Center

    Gormley, Andrea

    2015-01-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of…

  13. Grammatical versus Pragmatic Error: Employer Perceptions of Nonnative and Native English Speakers

    ERIC Educational Resources Information Center

    Wolfe, Joanna; Shanmugaraj, Nisha; Sipe, Jaclyn

    2016-01-01

    Many communication instructors make allowances for grammatical error in nonnative English speakers' writing, but do businesspeople do the same? We asked 169 businesspeople to comment on three versions of an email with different types of errors. We found that businesspeople do make allowances for errors made by nonnative English speakers,…

  14. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    PubMed

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when sample size and allocation rate to the treatment arms are modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
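
The inflation phenomenon can be demonstrated by Monte Carlo simulation. The sketch below uses a simpler distortion than the paper's worst-case sample-size adaptation, namely an unadjusted interim look, purely to show how exploiting interim knowledge pushes the type 1 error rate above the nominal 5%:

```python
import random
import statistics

def simulate_type1(n_trials=10000, n1=50, n2=50, z_crit=1.96, peek=True, seed=1):
    """Monte Carlo type 1 error rate of a one-sample z-test under H0
    (mean 0, known sd 1).

    With peek=True the null is rejected if |z| > z_crit at EITHER the
    interim look (after n1 observations) or the final analysis. This
    unadjusted interim look is a simpler inflation mechanism than the
    sample-size adaptation analyzed in the paper; it is used here only
    to illustrate the effect.
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_trials):
        x1 = [rng.gauss(0, 1) for _ in range(n1)]
        z_interim = statistics.fmean(x1) * n1 ** 0.5
        x = x1 + [rng.gauss(0, 1) for _ in range(n2)]
        z_final = statistics.fmean(x) * (n1 + n2) ** 0.5
        if abs(z_final) > z_crit or (peek and abs(z_interim) > z_crit):
            rejections += 1
    return rejections / n_trials

# The unadjusted interim look inflates the error rate above the nominal 5%.
print(simulate_type1(peek=False), simulate_type1(peek=True))
```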

  15. Population-based assessment of sensitivity and specificity of a pinhole for detection of significant refractive errors in the community.

    PubMed

    Marmamula, Srinivas; Keeffe, Jill E; Narsaiah, Saggam; Khanna, Rohit C; Rao, Gullapalli N

    2014-11-01

    Measurements of refractive errors through subjective or automated refraction are not always possible in rapid assessment studies and community vision screening programs; however, measurements of vision with habitual correction and with a pinhole can easily be made. Although improvements in vision with a pinhole are assumed to mean that a refractive error is present, no studies have investigated the magnitude of improvement in vision with pinhole that is predictive of refractive error. The aim was to measure the sensitivity and specificity of 'vision improvement with pinhole' in predicting the presence of refractive error in a community setting. Vision and vision with pinhole were measured using a logMAR chart for 488 of 582 individuals aged 15 to 50 years. Refractive errors were measured using non-cycloplegic autorefraction and subjective refraction. The presence of refractive error was defined using spherical equivalent refraction (SER) at two levels: SER greater than ±0.50 D sphere (DS) and SER greater than ±1.00 DS. Three definitions for significant improvement in vision with a pinhole were used: (1) presenting vision less than 6/12 and improving to 6/12 or better; (2) improvement in vision of more than one logMAR line; and (3) improvement in vision of more than two logMAR lines. For refractive error defined as spherical equivalent refraction greater than ±0.50 DS, the sensitivity and specificity of the pinhole test for predicting the presence of refractive error were 83.9 per cent (95% CI: 74.5 to 90.9) and 98.8 per cent (95% CI: 97.1 to 99.6), respectively, for definition 1. Definition 2 had a sensitivity of 89.7 per cent (95% CI: 81.3 to 95.2) and a specificity of 88.0 per cent (95% CI: 4.4 to 91.0). Definition 3 had a sensitivity of 75.9 per cent (95% CI: 65.5 to 84.4) and a specificity of 97.8 per cent (95% CI: 95.8 to 99.0). 
Similar results were found with spherical equivalent refraction greater than ±1.00 DS, when tested against the three pinhole-based definitions. Refractive error definitions based on improvement in vision with the pinhole shows good sensitivity and specificity at predicting the presence of significant refractive errors. These definitions can be used in rapid assessment surveys and community-based vision screenings. © 2014 The Authors. Clinical and Experimental Optometry © 2014 Optometrists Association Australia.
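
Sensitivity and specificity as used above reduce to simple confusion-matrix ratios. The counts below are hypothetical, chosen only so the ratios reproduce the reported definition-1 figures; they are not the study's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts reproducing roughly 83.9% sensitivity and 98.8%
# specificity for refractive error > +/-0.50 DS under definition 1.
sens, spec = sensitivity_specificity(tp=73, fn=14, tn=397, fp=5)
print(round(100 * sens, 1), round(100 * spec, 1))
```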

  16. Refractive errors in Mercyland Specialist Hospital, Osogbo, Western Nigeria.

    PubMed

    Adeoti, C O; Egbewale, B E

    2008-06-01

    The study was conducted to determine the magnitude and pattern of refractive errors in order to provide facilities for their management. A prospective study of 3601 eyes of 1824 consecutive patients was conducted. Information obtained included age, sex, occupation, visual acuity, and the type and degree of refractive error. The data were analysed using the Statistical Package for the Social Sciences (version 11.0). Refractive error was found in 1824 (53.71%) patients. There were 832 (45.61%) males and 992 (54.39%) females with a mean age of 35.55 years. Myopia was the commonest error (1412 eyes; 39.21%). Others included hypermetropia (840 eyes; 23.33%) and astigmatism (785 eyes; 21.80%), and 820 patients (1640 eyes) had presbyopia. Anisometropia was present in 791 (44.51%) of the 1777 patients who had bilateral refractive errors. Of the 2252 eyes with spherical errors, 1308 (58.08%) had errors between -0.50 and +0.50 dioptres; 567 (25.18%) had errors less than -0.50 dioptres, of which 63 (2.80%) were less than -5.00 dioptres; and 377 (16.74%) had errors greater than +0.50 dioptres, of which 81 (3.60%) were greater than +2.00 dioptres. The highest error was 20.00 dioptres for myopia and 18.00 dioptres for hypermetropia. Refractive error is common in this environment. Adequate provision should be made for its correction, bearing in mind the common types and degrees.

  17. Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.

    PubMed

    Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M

    2018-01-01

    Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate (p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.
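
The core retrospective filter, flagging orders discontinued within 120 minutes of submission, can be sketched as follows. The record layout is hypothetical, not the study's EHR schema:

```python
from datetime import datetime, timedelta

def rapidly_discontinued(orders, window_minutes=120):
    """Flag orders discontinued within `window_minutes` of submission.

    `orders` is a list of dicts with 'submitted' and 'discontinued'
    datetimes (a hypothetical layout for illustration).
    """
    window = timedelta(minutes=window_minutes)
    return [o for o in orders
            if o["discontinued"] is not None
            and o["discontinued"] - o["submitted"] <= window]

orders = [
    {"id": 1, "submitted": datetime(2018, 1, 1, 9, 0),
     "discontinued": datetime(2018, 1, 1, 9, 45)},   # flagged for review
    {"id": 2, "submitted": datetime(2018, 1, 1, 9, 0),
     "discontinued": datetime(2018, 1, 1, 14, 0)},   # outside the window
    {"id": 3, "submitted": datetime(2018, 1, 1, 9, 0),
     "discontinued": None},                          # still active
]
print([o["id"] for o in rapidly_discontinued(orders)])  # [1]
```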

  18. Wideband 1.064 micrometer detector evaluation

    NASA Technical Reports Server (NTRS)

    Green, S. I.

    1975-01-01

    The performance of several candidate detectors for use as communications detectors in a 400 Mbps 1.064 micrometer laser communication system was evaluated. The results of communication system Bit Error Rate (BER) testing for the best detector of each type are summarized. Complete test data for each detector type are presented. The 400 Mbps 1.064 micrometer communication system receiver test bed is described. The best communication system results for each detector type are included. Performance comparisons are made at a BER of 0.000001, the specification level chosen for satellite laser communication links. The data are presented in two groups. The first indicates the best performance levels that can be expected in normal space laser communication system operation. The second cites the best performance levels which can be achieved by focusing the signal to diffraction-limited spots on the photosensitive area.

  19. Optimal Multi-Type Sensor Placement for Structural Identification by Static-Load Testing

    PubMed Central

    Papadopoulou, Maria; Vernay, Didier; Smith, Ian F. C.

    2017-01-01

    Assessing ageing infrastructure is a critical challenge for civil engineers due to the difficulty in the estimation and integration of uncertainties in structural models. Field measurements are increasingly used to improve knowledge of the real behavior of a structure; this activity is called structural identification. Error-domain model falsification (EDMF) is an easy-to-use model-based structural-identification methodology which robustly accommodates systematic uncertainties originating from sources such as boundary conditions, numerical modelling and model fidelity, as well as aleatory uncertainties from sources such as measurement error and material parameter-value estimations. In most practical applications of structural identification, sensors are placed using engineering judgment and experience. However, since sensor placement is fundamental to the success of structural identification, a more rational and systematic method is justified. This study presents a measurement system design methodology to identify the best sensor locations and sensor types using information from static-load tests. More specifically, three static-load tests were studied for the sensor system design using three types of sensors for a performance evaluation of a full-scale bridge in Singapore. Several sensor placement strategies are compared using joint entropy as an information-gain metric. A modified version of the hierarchical algorithm for sensor placement is proposed to take into account mutual information between load tests. It is shown that a carefully-configured measurement strategy that includes multiple sensor types and several load tests maximizes information gain. PMID:29240684
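
Joint entropy as an information-gain metric can be computed directly from discretized model predictions at candidate sensor locations. A toy sketch with an invented prediction layout (not the EDMF implementation used in the paper):

```python
from collections import Counter
from itertools import combinations
from math import log2

def joint_entropy(predictions, sensors):
    """Joint entropy (bits) of the readings of `sensors` over model instances.

    `predictions[m][s]` is the discretized prediction of model instance m
    at candidate sensor location s (a hypothetical data layout).
    """
    counts = Counter(tuple(p[s] for s in sensors) for p in predictions)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

# Four model instances, three candidate locations; values are binned
# readings. Location s3 duplicates s1, so pairing them adds no information.
preds = [{"s1": 0, "s2": 0, "s3": 0},
         {"s1": 0, "s2": 1, "s3": 0},
         {"s1": 1, "s2": 0, "s3": 1},
         {"s1": 1, "s2": 1, "s3": 1}]

# Exhaustive search over sensor pairs: (s1, s2) separates all four model
# instances, attaining the maximum of 2 bits, while (s1, s3) gives 1 bit.
best = max(combinations(["s1", "s2", "s3"], 2),
           key=lambda pair: joint_entropy(preds, pair))
print(best, joint_entropy(preds, best))
```

The paper's hierarchical algorithm replaces this exhaustive search with a greedy selection and additionally accounts for mutual information between load tests; the entropy computation itself is the same idea.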

  20. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve☆

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  1. Patient-centered consumer health social network websites: a pilot study of quality of user-generated health information.

    PubMed

    Tsai, Christopher C; Tsai, Sarai H; Zeng-Treitler, Qing; Liang, Bryan A

    2007-10-11

    The quality of user-generated health information on consumer health social networking websites has not been studied. We collected a set of postings related to Diabetes Mellitus Type I from three such sites and classified them based on accuracy, error type, and clinical significance of error. We found 48% of postings contained medical content, and 54% of these were either incomplete or contained errors. About 85% of the incomplete and erroneous messages were potentially clinically significant.

  2. Evaluation of Phantom-Based Education System for Acupuncture Manipulation

    PubMed Central

    Lee, In-Seon; Lee, Ye-Seul; Park, Hi-Joon; Lee, Hyejung; Chae, Younbyoung

    2015-01-01

    Background Although acupuncture manipulation has been regarded as one of the important factors in clinical outcome, it has been difficult to train novice students to become skillful experts due to a lack of adequate educational program and tools. Objectives In the present study, we investigated whether newly developed phantom acupoint tools would be useful to practice-naïve acupuncture students for practicing the three different types of acupuncture manipulation to enhance their skills. Methods We recruited 12 novice students and had them practice acupuncture manipulations on the phantom acupoint (5% agarose gel). We used the Acusensor 2 and compared their acupuncture manipulation techniques, for which the target criteria were depth and time factors, at acupoint LI11 in the human body before and after 10 training sessions. The outcomes were depth of needle insertion, depth error from target criterion, time of rotating, lifting, and thrusting, time error from target criteria and the time ratio. Results After 10 training sessions, the students showed significantly improved outcomes in depth of needle, depth error (rotation, reducing lifting/thrusting), thumb-forward time error, thumb-backward time error (rotation), and lifting time (reinforcing lifting/thrusting). Conclusions The phantom acupoint tool could be useful in a phantom-based education program for acupuncture-manipulation training for students. For advanced education programs for acupuncture manipulation, we will need to collect additional information, such as patient responses, acupoint-specific anatomical characteristics, delicate tissue-like modeling, haptic and visual feedback, and data from an acupuncture practice simulator. PMID:25689598

  3. No Substitute for Going to the Field: Correcting Lidar DEMs in Salt Marshes

    NASA Astrophysics Data System (ADS)

    Renken, K.; Morris, J. T.; Lynch, J.; Bayley, H.; Neil, A.; Rasmussen, S.; Tyrrell, M.; Tanis, M.

    2016-12-01

    Models that forecast the response of salt marshes to current and future trends in sea level rise increasingly are used to guide management of these vulnerable ecosystems. Lidar-derived DEMs serve as the foundation for modeling landform change. However, caution is advised when using these DEMs as the starting point for models of salt marsh evolution. While broad vegetation class (i.e., young forest, old forest, grasslands, desert, etc.) has proven to be a significant predictor of vertical displacement error in terrestrial environments, differentiating error among different species or community types within the same ecosystem has received less attention. Salt marshes are dominated by monocultures of grass species and thus are an ideal environment to examine the within-species effect on lidar DEM error. We analyzed error of lidar DEMs using elevations from real-time kinematic (RTK) surveys in saltmarshes in multiple national parks and wildlife refuge areas from the mouth of the Chesapeake Bay to Massachusetts. Error of the lidar DEMs was sometimes large, on the order of 0.25 m, and varied significantly between sites because vegetation cover varies seasonally and lidar data was not always collected in the same season for each park. Vegetation cover and composition were used to explain differences between RTK elevations and lidar DEMs. This research underscores the importance of collecting RTK elevation data and vegetation cover data coincident with lidar data to produce correction factors specific to individual salt marsh sites.
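
The correction-factor idea above, estimating a per-vegetation-class vertical bias from coincident RTK points and subtracting it from the DEM, can be sketched with invented survey values (the species names and elevations are illustrative, not the study's data):

```python
from statistics import mean

def class_bias(rtk_points):
    """Mean vertical error (lidar minus RTK) per vegetation class.

    `rtk_points` is a list of (veg_class, lidar_z, rtk_z) tuples,
    a hypothetical survey layout for illustration.
    """
    by_class = {}
    for veg, lidar_z, rtk_z in rtk_points:
        by_class.setdefault(veg, []).append(lidar_z - rtk_z)
    return {veg: mean(errs) for veg, errs in by_class.items()}

survey = [("S. alterniflora", 1.30, 1.05), ("S. alterniflora", 1.10, 0.88),
          ("S. patens", 1.42, 1.33), ("S. patens", 1.25, 1.17)]
bias = class_bias(survey)

# Subtract the class-specific bias from a DEM cell with that cover type.
corrected = 1.50 - bias["S. alterniflora"]
print({k: round(v, 3) for k, v in bias.items()}, round(corrected, 3))
```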

  4. Evaluation of phantom-based education system for acupuncture manipulation.

    PubMed

    Lee, In-Seon; Lee, Ye-Seul; Park, Hi-Joon; Lee, Hyejung; Chae, Younbyoung

    2015-01-01

    Although acupuncture manipulation has been regarded as one of the important factors in clinical outcome, it has been difficult to train novice students to become skillful experts due to a lack of adequate educational program and tools. In the present study, we investigated whether newly developed phantom acupoint tools would be useful to practice-naïve acupuncture students for practicing the three different types of acupuncture manipulation to enhance their skills. We recruited 12 novice students and had them practice acupuncture manipulations on the phantom acupoint (5% agarose gel). We used the Acusensor 2 and compared their acupuncture manipulation techniques, for which the target criteria were depth and time factors, at acupoint LI11 in the human body before and after 10 training sessions. The outcomes were depth of needle insertion, depth error from target criterion, time of rotating, lifting, and thrusting, time error from target criteria and the time ratio. After 10 training sessions, the students showed significantly improved outcomes in depth of needle, depth error (rotation, reducing lifting/thrusting), thumb-forward time error, thumb-backward time error (rotation), and lifting time (reinforcing lifting/thrusting). The phantom acupoint tool could be useful in a phantom-based education program for acupuncture-manipulation training for students. For advanced education programs for acupuncture manipulation, we will need to collect additional information, such as patient responses, acupoint-specific anatomical characteristics, delicate tissue-like modeling, haptic and visual feedback, and data from an acupuncture practice simulator.

  5. Artificial neural network implementation of a near-ideal error prediction controller

    NASA Technical Reports Server (NTRS)

    Mcvey, Eugene S.; Taylor, Lynore Denise

    1992-01-01

    A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited (i.e., not random), near-ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controller developments include pattern recognition and fast-time simulation, which are applicable to the analysis of linear sampled-data systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using Neural Networks. Neural Networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. 
In using this type of error prediction, it is assumed that the system error responses are known for a particular input and modeled plant. These responses are used in the error prediction controller. An analysis was done of the general dynamic behavior that results from including a digital error predictor in a control loop, and the results were compared to those obtained with the near-ideal Neural Network error predictor. This analysis was done for second- and third-order systems.
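
The template matching step described above can be sketched with a plain nearest-template lookup. The stored error responses are invented, and the lookup stands in for the back-propagation network the abstract proposes:

```python
def match_template(observed, templates):
    """Return the name of the stored error-response template closest to
    the observed response (minimum sum of squared differences).

    A plain nearest-template lookup standing in for the back-propagation
    network; the template values below are invented for illustration.
    """
    def sse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda name: sse(observed, templates[name]))

# Learned error responses for two expected input types (illustrative values).
templates = {
    "step": [0.0, 0.8, 0.5, 0.2, 0.1],
    "ramp": [0.0, 0.2, 0.4, 0.6, 0.8],
}

# A noisy step-like response is matched to the step template; the stored
# predicted error for that input type would then drive the controller.
print(match_template([0.05, 0.75, 0.55, 0.15, 0.05], templates))
```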

  6. Multiplicity Control in Structural Equation Modeling

    ERIC Educational Resources Information Center

    Cribbie, Robert A.

    2007-01-01

    Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power and true model rates of familywise and false discovery rate controlling procedures were…
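
The standard false discovery rate controlling procedure in this setting is the Benjamini-Hochberg step-up rule, sketched here on invented p-values for six model parameters:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of the
    hypotheses rejected at false discovery rate q.

    Reject the k smallest p-values, where k is the largest rank such
    that p_(k) <= (k / m) * q.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value clears its threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    return sorted(order[:k])

# Six parameter p-values: BH rejects the three smallest at q = 0.05,
# whereas an unadjusted 0.05 cutoff would also reject the fourth.
pvals = [0.001, 0.008, 0.020, 0.041, 0.30, 0.55]
print(benjamini_hochberg(pvals))
```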

  7. Robustness of Type I Error and Power in Set Correlation Analysis of Contingency Tables.

    ERIC Educational Resources Information Center

    Cohen, Jacob; Nee, John C. M.

    1990-01-01

    The analysis of contingency tables via set correlation allows the assessment of subhypotheses involving contrast functions of the categories of the nominal scales. The robustness of such methods with regard to Type I error and statistical power was studied via a Monte Carlo experiment. (TJH)

  8. Integrating automated structured analysis and design with Ada programming support environments

    NASA Technical Reports Server (NTRS)

    Hecht, Alan; Simmons, Andy

    1986-01-01

    Ada Programming Support Environments (APSE) include many powerful tools that address the implementation of Ada code. These tools do not address the entire software development process. Structured analysis is a methodology that addresses the creation of complete and accurate system specifications. Structured design takes a specification and derives a plan to decompose the system into subcomponents, and provides heuristics to optimize the software design to minimize errors and maintenance. It also promotes the creation of usable modules. Studies have shown that most software errors result from poor system specifications, and that these errors become more expensive to fix as the development process continues. Structured analysis and design help to uncover errors in the early stages of development. The APSE tools help to ensure that the code produced is correct, and aid in finding obscure coding errors. However, they do not have the capability to detect errors in specifications or to detect poor designs. An automated system for structured analysis and design, TEAMWORK, which can be integrated with an APSE to support software systems development from specification through implementation, is described. These tools complement each other to help developers improve quality and productivity, as well as to reduce development and maintenance costs. Complete system documentation and reusable code also result from the use of these tools. Integrating an APSE with automated tools for structured analysis and design provides capabilities and advantages beyond those realized with any of these systems used by themselves.

  9. Types of errors by referees and perception of injustice by soccer players: a preliminary study.

    PubMed

    Canovas, Sophie; Reynes, Eric; Ferrand, Claude; Pantaléon, Nathalie; Long, Thierry

    2008-02-01

    This study investigated the effect of referees' errors on players' perceived injustice in soccer. Two conditions were investigated: Referee Decision, with three types (correctly called a foul vs. wrongly called a foul vs. did not call a foul), and Repetition of the Situation, with two types (isolated vs. repeated). Male soccer players at regional and departmental levels of practice (N = 95, M(age) = 23.2, SD = 5.1) were asked to rank six hypothetical situations according to the perceived injustice. Analysis indicated significant effects of Referee Decision and Repetition of the Situation on the perception of injustice, but showed no differences between the types of error. However, age and years of soccer experience were associated with perception of injustice when the referee correctly called a foul.

  10. Using meta-quality to assess the utility of volunteered geographic information for science.

    PubMed

    Langley, Shaun A; Messina, Joseph P; Moore, Nathan

    2017-11-06

    Volunteered geographic information (VGI) has strong potential to be increasingly valuable to scientists in collaboration with non-scientists. The abundance of mobile phones and other wireless forms of communication opens up significant opportunities for the public to get involved in scientific research. As these devices and activities become more abundant, questions of uncertainty and error in volunteer data are emerging as critical components of using volunteer-sourced spatial data. Here we present a methodology for using VGI and assessing its sensitivity to three types of error. More specifically, this study evaluates the reliability of data from volunteers based on their historical patterns. The specific context is a case study in surveillance of tsetse flies, a health concern as the primary vector of African trypanosomiasis. Reliability, as measured by a reputation score, determines the threshold for accepting the volunteered data for inclusion in a tsetse presence/absence model. Higher reputation scores are successful in identifying areas of higher modeled tsetse prevalence. A dynamic threshold is needed, but the quality of VGI will improve as more data are collected, and the errors in identifying reliable participants will decrease. This system allows for two-way communication between researchers and the public, and a way to evaluate the reliability of VGI. Boosting the public's ability to participate in such work can improve disease surveillance and promote citizen science. In the absence of active surveillance, VGI can provide valuable spatial information given that the data are reliable.
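
    The reputation-gating idea described in this record can be sketched as follows. This is a hypothetical illustration only: the scoring rule (fraction of verified past reports), the threshold value, and the data layout are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a volunteer's reputation is the fraction of their
# past reports that agreed with verified ground truth; only reports from
# volunteers above a threshold enter the presence/absence model.

def reputation(history):
    """history: list of booleans, True if a past report was verified correct."""
    return sum(history) / len(history) if history else 0.0

def accept_reports(reports, histories, threshold=0.7):
    """Keep only reports whose volunteer meets the reputation threshold."""
    return [r for r in reports
            if reputation(histories[r["volunteer"]]) >= threshold]

histories = {"v1": [True, True, False, True], "v2": [False, False, True]}
reports = [{"volunteer": "v1", "site": (34.1, -5.2), "tsetse": True},
           {"volunteer": "v2", "site": (34.3, -5.0), "tsetse": False}]

accepted = accept_reports(reports, histories)
# only v1 (reputation 0.75) passes the 0.7 threshold; v2 (0.33) is rejected
```

    A dynamic threshold, as the abstract suggests, would simply make `threshold` a function of the accumulated data rather than a constant.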

  11. Statistical methodology for estimating the mean difference in a meta-analysis without study-specific variance information.

    PubMed

    Sangnawakij, Patarawan; Böhning, Dankmar; Adams, Stephen; Stanton, Michael; Holling, Heinz

    2017-04-30

    Statistical inference for analyzing the results from several independent studies on the same quantity of interest has been investigated frequently in recent decades. Typically, any meta-analytic inference requires that the quantity of interest is available from each study together with an estimate of its variability. The current work is motivated by a meta-analysis on comparing two treatments (thoracoscopic and open) of congenital lung malformations in young children. Quantities of interest include continuous end-points such as length of operation or number of chest tube days. As studies only report mean values (and no standard errors or confidence intervals), the question arises how meta-analytic inference can be developed. We suggest two methods to estimate study-specific variances in such a meta-analysis, where only sample means and sample sizes are available in the treatment arms. A general likelihood ratio test is derived for testing equality of variances in two groups. By means of simulation studies, the bias and estimated standard error of the overall mean difference from both methodologies are evaluated and compared with two existing approaches: complete study analysis only and partial variance information. The performance of the test is evaluated in terms of type I error. Additionally, we illustrate these methods in the meta-analysis on comparing thoracoscopic and open surgery for congenital lung malformations and in a meta-analysis on the change in renal function after kidney donation. Copyright © 2017 John Wiley & Sons, Ltd.
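
    Once study-specific variances have been estimated (by either of the suggested methods), the overall mean difference follows from standard fixed-effect inverse-variance weighting. A minimal sketch of that final pooling step, with made-up numbers rather than the paper's data:

```python
import math

# Fixed-effect inverse-variance pooling: each study's mean difference is
# weighted by the reciprocal of its (estimated) variance. The numbers
# below are illustrative only, not from the motivating meta-analysis.

def pooled_mean_difference(diffs, variances):
    """diffs: per-study mean differences; variances: their estimated variances."""
    weights = [1.0 / v for v in variances]
    est = sum(w * d for w, d in zip(weights, diffs)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))   # standard error of the pooled estimate
    return est, se

diffs = [12.0, 8.5, 15.2]      # e.g. difference in length of operation (minutes)
variances = [4.0, 9.0, 6.25]   # estimated, since the studies report none
est, se = pooled_mean_difference(diffs, variances)
ci = (est - 1.96 * se, est + 1.96 * se)   # approximate 95% confidence interval
```

    The paper's contribution is upstream of this step: supplying the `variances` when studies report only means and sample sizes.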

  12. Metrological Support in Technosphere Safety

    NASA Astrophysics Data System (ADS)

    Akhobadze, G. N.

    2017-11-01

    The principle of metrological support in technosphere safety, based on practical metrology, is considered. The theoretical aspects of accuracy and errors of measuring instruments intended for diagnostics and control of the technosphere under the influence of factors harmful to human beings are presented. The necessity of choosing measuring devices with high metrological characteristics, according to accuracy class and the contact of sensitive elements with the medium under control, is shown. The types of additional errors in measuring instruments that arise under environmental influences are described. A specific example of applying analyzers to control industrial emissions and to measure oil and particulate matter in wastewater is given, allowing their advantages and disadvantages to be assessed. Recommendations regarding the missing metrological characteristics of the instruments in use are also provided. Continuous monitoring of the technosphere, taking the metrological principles into account, is expected to support efficient forecasting of technosphere development and appropriate decision making.

  13. Random mutagenesis of BoNT/E Hc nanobody to construct a secondary phage-display library.

    PubMed

    Shahi, B; Mousavi Gargari, S L; Rasooli, I; Rajabi Bazl, M; Hoseinpoor, R

    2014-08-01

    To construct a secondary mutant phage-display library of a recombinant single variable domain (VHH) against botulinum neurotoxin E by error-prone PCR. The gene coding for a specific VHH derived from a camel immunized with the binding domain of botulinum neurotoxin E (BoNT/E) was amplified by error-prone PCR. Several biopanning rounds were used to screen the phage-displaying BoNT/E Hc nanobodies. The final nanobody, SHMR4, with increased affinity, recognized BoNT/E toxin with no cross-reactivity with other antigens, especially the related BoNT toxins. The constructed nanobody could be a suitable candidate for VHH-based biosensor production to detect Clostridium botulinum type E. Diagnosis and treatment of botulinum neurotoxins are important. Generation of high-affinity antibodies based on the construction of secondary libraries using an affinity maturation step leads to the development of reagents for precise diagnosis and therapy. © 2014 The Society for Applied Microbiology.

  14. Magnetometer-augmented IMU simulator: in-depth elaboration.

    PubMed

    Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel

    2015-03-04

    The location of objects is a growing research topic due, for instance, to the expansion of civil drones or intelligent vehicles. This expansion was made possible through the development of microelectromechanical systems (MEMS), inexpensive and miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements, given a specific input trajectory. This will allow the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be analytically determined. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. As MEMS inertial measurement units (IMUs) are nowadays generally magnetometer-augmented, an absolute world magnetic model is implemented. After the determination of the perfect measurement (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests.
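
    The "realistic error model" step can be illustrated with the classic MEMS sensor error model (constant bias, scale-factor error, additive white Gaussian noise). This is a generic sketch, not the authors' code, and all parameter values are made up for illustration:

```python
import math
import random

# Generic MEMS error model sketch: a "realistic" sensor sample is the
# error-free measurement corrupted by a constant bias, a scale-factor
# error, and additive white Gaussian noise.

random.seed(0)

def simulate_sensor(true_signal, bias, scale_error, noise_std):
    """true_signal: ideal measurements produced by the kinematic model."""
    return [(1.0 + scale_error) * s + bias + random.gauss(0.0, noise_std)
            for s in true_signal]

t = [k / 99 for k in range(100)]                       # 100 samples over 1 s
true_rate = [math.sin(2 * math.pi * tk) for tk in t]   # ideal rate gyro, rad/s
measured = simulate_sensor(true_rate, bias=0.02, scale_error=0.01,
                           noise_std=0.005)
```

    Averaging `measured - true_rate` over the window recovers the bias term, which is how such simulators are typically sanity-checked.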

  15. Magnetometer-Augmented IMU Simulator: In-Depth Elaboration

    PubMed Central

    Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel

    2015-01-01

    The location of objects is a growing research topic due, for instance, to the expansion of civil drones or intelligent vehicles. This expansion was made possible through the development of microelectromechanical systems (MEMS), inexpensive and miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements, given a specific input trajectory. This will allow the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be analytically determined. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. As MEMS inertial measurement units (IMUs) are nowadays generally magnetometer-augmented, an absolute world magnetic model is implemented. After the determination of the perfect measurement (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests. PMID:25746095

  16. SPS pilot signal design and power transponder analysis, volume 2, phase 3

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.; Scholtz, R. A.; Chie, C. M.

    1980-01-01

    The problem of pilot signal parameter optimization and the related problem of power transponder performance analysis for the Solar Power Satellite reference phase control system are addressed. Signal and interference models were established to enable specifications of the front end filters including both the notch filter and the antenna frequency response. A simulation program package was developed to be included in SOLARSIM to perform tradeoffs of system parameters based on minimizing the phase error for the pilot phase extraction. An analytical model that characterizes the overall power transponder operation was developed. From this model, the effects of different phase noise disturbance sources that contribute to phase variations at the output of the power transponders were studied and quantified. Results indicate that it is feasible to hold the antenna array phase error to less than one degree per power module for the type of disturbances modeled.

  17. Mixed-effects location and scale Tobit joint models for heterogeneous longitudinal data with skewness, detection limits, and measurement errors.

    PubMed

    Lu, Tao

    2017-01-01

    The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for the heteroscedasticity commonly observed in between- and within-subject variation. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location-scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limit of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data in many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method, and alternative models are compared under different conditions.

  18. Development of adaptive observation strategy using retrospective optimal interpolation

    NASA Astrophysics Data System (ADS)

    Noh, N.; Kim, S.; Song, H.; Lim, G.

    2011-12-01

    Retrospective optimal interpolation (ROI) is a method used to minimize cost functions with multiple minima without using adjoint models. Song and Lim (2011) performed experiments to reduce the computational cost of implementing ROI by transforming the control variables into eigenvectors of the background error covariance. We adapt the ROI algorithm to compute sensitivity estimates for severe weather events over the Korean peninsula. The eigenvectors of the ROI algorithm are modified every time observations are assimilated; the modified eigenvectors thus show the error distribution of the control variables updated by assimilating the observations, so the effects of specific observations can be estimated. To verify the adaptive observation strategy, high-impact weather over the Korean peninsula is simulated and interpreted using the WRF modeling system, and sensitive regions for each high-impact weather event are calculated. The effects of assimilating each observation type are discussed.

  19. A goodness-of-fit test for capture-recapture model M(t) under closure

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

    A new, fully efficient goodness-of-fit test for the time-specific closed-population capture-recapture model M(t) is presented. This test is based on the residual distribution of the capture history data given the maximum likelihood parameter estimates under model M(t), is partitioned into informative components, and is based on chi-square statistics. Comparison of this test with Leslie's test (Leslie, 1958, Journal of Animal Ecology 27, 84-86) for model M(t), using Monte Carlo simulations, shows the new test generally outperforms Leslie's test. The new test is frequently computable when Leslie's test is not, has Type I error rates that are closer to nominal error rates than Leslie's test, and is sensitive to behavioral variation and heterogeneity in capture probabilities. Leslie's test is not sensitive to behavioral variation in capture probabilities but, when computable, has greater power to detect heterogeneity than the new test.
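
    The chi-square machinery underlying such a goodness-of-fit test can be sketched generically: compare observed counts (here standing in for capture-history counts) with counts expected under the fitted model. The counts below are invented for illustration, not taken from the paper:

```python
# Generic Pearson chi-square goodness-of-fit sketch: sum of
# (observed - expected)^2 / expected over the test's components.

def chi_square_stat(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [18, 25, 40, 17]            # invented component counts
expected = [20.0, 24.0, 36.0, 20.0]    # counts expected under the model
stat = chi_square_stat(observed, expected)

CRITICAL_3DF_05 = 7.815                # chi-square critical value, df=3, alpha=0.05
reject = stat > CRITICAL_3DF_05        # False here: no evidence of lack of fit
```

    The actual test partitions the statistic into informative components, each compared against its own chi-square reference distribution.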

  20. Obtaining correct compile results by absorbing mismatches between data types representations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.

  1. Obtaining correct compile results by absorbing mismatches between data types representations

    DOEpatents

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2017-03-21

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.
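
    A toy sketch of the conversion-table mechanism described in the patent text: map node types of the first language's AST to the second language's types, and wrap anything the table cannot convert in a special error node holding the offending token so that unparsing can re-emit the original source. The node encoding, type names, and helper functions are all invented for illustration:

```python
# Hypothetical AST nodes as (type, payload) tuples; the conversion table
# maps data representation types in the first language to those in the
# second (here: C++-like names, purely illustrative).
CONVERSION_TABLE = {"Int32": "int", "Float64": "double", "Str": "std::string"}

def convert(node):
    """Convert one first-language AST node to the second language."""
    ntype, payload = node
    if ntype in CONVERSION_TABLE:
        return (CONVERSION_TABLE[ntype], payload)
    # compilation error: generate a special node and store the error token
    return ("ErrorNode", {"error_token": payload})

def unparse(node):
    """On an error node, output the token as first-language source code."""
    ntype, payload = node
    if ntype == "ErrorNode":
        return payload["error_token"]
    return f"{ntype} {payload}"

tree = [("Int32", "x"), ("Decimal128", "y")]   # second node has no mapping
converted = [convert(n) for n in tree]
```

    Here `converted[1]` becomes the special error node, and `unparse` falls back to the original first-language token `y`, mirroring the patent's error-absorption behavior.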

  2. Obtaining correct compile results by absorbing mismatches between data types representations

    DOEpatents

    Horie, Michihiro; Horii, Hiroshi H.; Kawachiya, Kiyokuni; Takeuchi, Mikio

    2017-11-21

    Methods and a system are provided. A method includes implementing a function, which a compiler for a first language does not have, using a compiler for a second language. The implementing step includes generating, by the compiler for the first language, a first abstract syntax tree. The implementing step further includes converting, by a converter, the first abstract syntax tree to a second abstract syntax tree of the compiler for the second language using a conversion table from data representation types in the first language to data representation types in the second language. When a compilation error occurs, the implementing step also includes generating a special node for error processing in the second abstract syntax tree and storing an error token in the special node. When unparsing, the implementing step additionally includes outputting the error token, in the form of source code written in the first language.

  3. Small sample mediation testing: misplaced confidence in bootstrapped confidence intervals.

    PubMed

    Koopman, Joel; Howe, Michael; Hollenbeck, John R; Sin, Hock-Peng

    2015-01-01

    Bootstrapping is an analytical tool commonly used in psychology to test the statistical significance of the indirect effect in mediation models. Bootstrapping proponents have particularly advocated for its use for samples of 20-80 cases. This advocacy has been heeded, especially in the Journal of Applied Psychology, as researchers are increasingly utilizing bootstrapping to test mediation with samples in this range. We discuss reasons to be concerned with this escalation, and in a simulation study focused specifically on this range of sample sizes, we demonstrate not only that bootstrapping has insufficient statistical power to provide a rigorous hypothesis test in most conditions but also that bootstrapping has a tendency to exhibit an inflated Type I error rate. We then extend our simulations to investigate an alternative empirical resampling method as well as a Bayesian approach and demonstrate that they exhibit comparable statistical power to bootstrapping in small samples without the associated inflated Type I error. Implications for researchers testing mediation hypotheses in small samples are presented. For researchers wishing to use these methods in their own research, we have provided R syntax in the online supplemental materials. (c) 2015 APA, all rights reserved.
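
    A bare-bones percentile-bootstrap test of an indirect effect, of the kind critiqued in this record, can be sketched as follows. The data are simulated (n = 40, inside the 20-80 range discussed), and for brevity the b path is a simple regression of Y on M; a full mediation model would also control for X in that regression:

```python
import random
import statistics

# Simulate a simple mediation chain X -> M -> Y with n = 40.
random.seed(1)
n = 40
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]
y = [0.5 * mi + random.gauss(0, 1) for mi in m]

def slope(u, v):
    """OLS slope of v regressed on u."""
    ubar, vbar = statistics.fmean(u), statistics.fmean(v)
    num = sum((ui - ubar) * (vi - vbar) for ui, vi in zip(u, v))
    den = sum((ui - ubar) ** 2 for ui in u)
    return num / den

def indirect(idx):
    """Indirect effect a*b computed on a resampled index set."""
    xs, ms, ys = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
    return slope(xs, ms) * slope(ms, ys)

# 2000 bootstrap resamples of the indirect effect, then a 95% percentile CI.
boot = sorted(indirect([random.randrange(n) for _ in range(n)])
              for _ in range(2000))
ci = (boot[49], boot[1949])
significant = not (ci[0] <= 0.0 <= ci[1])
```

    The record's point is that in exactly this sample-size regime, such a procedure can be underpowered while also showing an inflated Type I error rate.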

  4. A Probabilistic Palimpsest Model of Visual Short-term Memory

    PubMed Central

    Matthey, Loic; Bays, Paul M.; Dayan, Peter

    2015-01-01

    Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ. PMID:25611204

  5. A probabilistic palimpsest model of visual short-term memory.

    PubMed

    Matthey, Loic; Bays, Paul M; Dayan, Peter

    2015-01-01

    Working memory plays a key role in cognition, and yet its mechanisms remain much debated. Human performance on memory tasks is severely limited; however, the two major classes of theory explaining the limits leave open questions about key issues such as how multiple simultaneously-represented items can be distinguished. We propose a palimpsest model, with the occurrent activity of a single population of neurons coding for several multi-featured items. Using a probabilistic approach to storage and recall, we show how this model can account for many qualitative aspects of existing experimental data. In our account, the underlying nature of a memory item depends entirely on the characteristics of the population representation, and we provide analytical and numerical insights into critical issues such as multiplicity and binding. We consider representations in which information about individual feature values is partially separate from the information about binding that creates single items out of multiple features. An appropriate balance between these two types of information is required to capture fully the different types of error seen in human experimental data. Our model provides the first principled account of misbinding errors. We also suggest a specific set of stimuli designed to elucidate the representations that subjects actually employ.

  6. Rapid Crop Cover Mapping for the Conterminous United States.

    PubMed

    Dahal, Devendra; Wylie, Bruce; Howard, Danny

    2018-06-05

    Timely crop cover maps with sufficient resolution are important components of various environmental planning and research applications. Through the modification and use of a previously developed crop classification model (CCM), which was originally developed to generate historical annual crop cover maps, we hypothesized that such crop cover maps could be generated rapidly during the growing season. Through a process of incrementally removing weekly and monthly independent variables from the CCM and implementing a 'two model mapping' approach, we found it viable to generate conterminous United States-wide rapid crop cover maps at a resolution of 250 m for the current year by the month of September. In this approach, we divided the CCM model into one 'crop type model' to handle the classification of nine specific crops and a second, binary model to classify the presence or absence of 'other' crops. Under the two model mapping approach, the training errors were 0.8% and 1.5% for the crop type and binary model, respectively, while test errors were 5.5% and 6.4%, respectively. With spatial mapping accuracies for annual maps reaching upwards of 70%, this approach demonstrated a strong potential for generating rapid crop cover maps by the 1st of September.
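
    The 'two model mapping' idea can be sketched with stand-in classifiers: a binary model first separates mapped crops from 'other', and pixels that pass are handed to the multi-class crop-type model. The NDVI rules and crop names below are placeholders, not the CCM's actual decision logic:

```python
# Stand-in models for illustration: the real CCM uses many remote-sensing
# and environmental variables, not a single NDVI threshold.

def binary_model(pixel):
    """True if the pixel belongs to one of the nine mapped crops."""
    return pixel["ndvi"] > 0.4

def crop_type_model(pixel):
    """Stand-in for the nine-crop classifier."""
    return "corn" if pixel["ndvi"] > 0.6 else "wheat"

def classify(pixel):
    """Two model mapping: binary gate first, crop-type model second."""
    return crop_type_model(pixel) if binary_model(pixel) else "other"

pixels = [{"ndvi": 0.7}, {"ndvi": 0.5}, {"ndvi": 0.2}]
labels = [classify(p) for p in pixels]
```

    Splitting the problem this way lets the binary gate absorb the heterogeneous 'other' class, so the crop-type model only ever discriminates among the nine crops of interest.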

  7. Medication Administration Errors in an Adult Emergency Department of a Tertiary Health Care Facility in Ghana.

    PubMed

    Acheampong, Franklin; Tetteh, Ashalley Raymond; Anto, Berko Panyin

    2016-12-01

    This study determined the incidence, types, clinical significance, and potential causes of medication administration errors (MAEs) at the emergency department (ED) of a tertiary health care facility in Ghana. This study used a cross-sectional nonparticipant observational technique. Study participants (nurses) were observed preparing and administering medication at the ED of a 2000-bed tertiary care hospital in Accra, Ghana. The observations were then compared with patients' medication charts, and identified errors were clarified with staff for possible causes. Of the 1332 observations made, involving 338 patients and 49 nurses, 362 had errors, representing 27.2%. However, the error rate excluding "lack of drug availability" fell to 12.8%. Without wrong time error, the error rate was 22.8%. The 2 most frequent error types were omission (n = 281, 77.6%) and wrong time (n = 58, 16%) errors. Omission error was mainly due to unavailability of medicine, 48.9% (n = 177). Although only one of the errors was potentially fatal, 26.7% were definitely clinically severe. The common themes that dominated the probable causes of MAEs were unavailability, staff factors, patient factors, prescription, and communication problems. This study gives credence to similar studies in different settings that MAEs occur frequently in the ED of hospitals. Most of the errors identified were not potentially fatal; however, preventive strategies need to be used to make life-saving processes such as drug administration in such specialized units error-free.

  8. A Learner Corpus-Based Study on Verb Errors of Turkish EFL Learners

    ERIC Educational Resources Information Center

    Can, Cem

    2017-01-01

    As learner corpora have presently become readily accessible, it is practicable to examine interlanguage errors and carry out error analysis (EA) on learner-generated texts. The data available in a learner corpus enable researchers to investigate authentic learner errors and their respective frequencies in terms of types and tokens as well as…

  9. An Analysis of Spanish and German Learners' Errors. Working Papers on Bilingualism, No. 7.

    ERIC Educational Resources Information Center

    LoCoco, Veronica Gonzalez-Mena

    This study analyzes Spanish and German errors committed by adult native speakers of English enrolled in elementary and intermediate levels. Four written samples were collected for each target language, over a period of five months. Errors were categorized according to their possible source. Types of errors were ordered according to their…

  10. Performance Errors in Weight Training and Their Correction.

    ERIC Educational Resources Information Center

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…

  11. Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.

    ERIC Educational Resources Information Center

    Hoppe, H. Ulrich

    1994-01-01

    Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)

  12. Error-Eliciting Problems: Fostering Understanding and Thinking

    ERIC Educational Resources Information Center

    Lim, Kien H.

    2014-01-01

    Student errors are springboards for analyzing, reasoning, and justifying. The mathematics education community recognizes the value of student errors, noting that "mistakes are seen not as dead ends but rather as potential avenues for learning." To induce specific errors and help students learn, choose tasks that might produce mistakes.…

  13. Incidence of speech recognition errors in the emergency department.

    PubMed

    Goss, Foster R; Zhou, Li; Weiner, Scott G

    2016-09-01

    Physician use of computerized speech recognition (SR) technology has risen in recent years due to its ease of use and efficiency at the point of care. However, error rates between 10 and 23% have been observed, raising concern about the number of errors being entered into the permanent medical record, their impact on quality of care and medical liability that may arise. Our aim was to determine the incidence and types of SR errors introduced by this technology in the emergency department (ED). Level 1 emergency department with 42,000 visits/year in a tertiary academic teaching hospital. A random sample of 100 notes dictated by attending emergency physicians (EPs) using SR software was collected from the ED electronic health record between January and June 2012. Two board-certified EPs annotated the notes and conducted error analysis independently. An existing classification schema was adopted to classify errors into eight errors types. Critical errors deemed to potentially impact patient care were identified. There were 128 errors in total or 1.3 errors per note, and 14.8% (n=19) errors were judged to be critical. 71% of notes contained errors, and 15% contained one or more critical errors. Annunciation errors were the highest at 53.9% (n=69), followed by deletions at 18.0% (n=23) and added words at 11.7% (n=15). Nonsense errors, homonyms and spelling errors were present in 10.9% (n=14), 4.7% (n=6), and 0.8% (n=1) of notes, respectively. There were no suffix or dictionary errors. Inter-annotator agreement was 97.8%. This is the first estimate at classifying speech recognition errors in dictated emergency department notes. Speech recognition errors occur commonly with annunciation errors being the most frequent. Error rates were comparable if not lower than previous studies. 15% of errors were deemed critical, potentially leading to miscommunication that could affect patient care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Medication errors with electronic prescribing (eP): Two views of the same picture

    PubMed Central

    2010-01-01

    Background Quantitative prospective methods are widely used to evaluate the impact of new technologies such as electronic prescribing (eP) on medication errors. However, they are labour-intensive and it is not always feasible to obtain pre-intervention data. Our objective was to compare the eP medication error picture obtained with retrospective quantitative and qualitative methods. Methods The study was carried out at one English district general hospital approximately two years after implementation of an integrated electronic prescribing, administration and records system. Quantitative: A structured retrospective analysis was carried out of clinical records and medication orders for 75 randomly selected patients admitted to three wards (medicine, surgery and paediatrics) six months after eP implementation. Qualitative: Eight doctors, 6 nurses, 8 pharmacy staff and 4 other staff at senior, middle and junior grades, and 19 adult patients on acute surgical and medical wards were interviewed. Staff interviews explored experiences of developing and working with the system; patient interviews focused on experiences of medicine prescribing and administration on the ward. Interview transcripts were searched systematically for accounts of medication incidents. A classification scheme was developed and applied to the errors identified in the records review. Results The two approaches produced similar pictures of the drug use process. Interviews identified types of error identified in the retrospective notes review plus two eP-specific errors which were not detected by record review. Interview data took less time to collect than record review, and provided rich data on the prescribing process, and reasons for delays or non-administration of medicines, including "once only" orders and "as required" medicines. Conclusions The qualitative approach provided more understanding of processes, and some insights into why medication errors can happen. 
The method is cost-effective and could be used to supplement information from anonymous error reporting schemes. PMID:20497532

  15. Determining Type I and Type II Errors when Applying Information Theoretic Change Detection Metrics for Data Association and Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Wilkins, M.; Moyer, E. J.; Hussein, Islam I.; Schumacher, P. W., Jr.

Correlating new detections back to a large catalog of resident space objects (RSOs) requires solving one of three types of data association problems: observation-to-track, track-to-track, or observation-to-observation. The authors' previous work has explored the use of various information divergence metrics for solving these problems: Kullback-Leibler (KL) divergence, mutual information, and Bhattacharyya distance. In addition to approaching the data association problem strictly from the metric tracking aspect, we have explored fusing metric and photometric data using Bayesian probabilistic reasoning for RSO identification to aid in our ability to correlate data to specific RSOs. In this work, we will focus our attention on the KL divergence, which is a measure of the information gained when new evidence causes the observer to revise their beliefs. We can apply the Principle of Minimum Discrimination Information such that new data produces as small an information gain as possible, and this information change is bounded by ɛ. Choosing an appropriate value of ɛ for both convergence and change detection is a function of one's risk tolerance. A small ɛ for change detection increases alarm rates, while a larger ɛ for convergence means that new evidence need not be identical in information content. We need to understand what this change detection metric implies for Type I (α) and Type II (β) errors when we are forced to decide whether new evidence represents a true change in the characterization of an object or is merely within the bounds of our measurement uncertainty. This is unclear for the case of fusing multiple kinds and qualities of characterization evidence that may exist in different metric spaces or may even be semantic statements. To this end, we explore the use of Sequential Probability Ratio Testing, where we suppose that we may need to collect additional evidence before accepting or rejecting the null hypothesis that a change has occurred.
In this work, we will explore the effects of choosing ɛ as a function of α and β. Our intent is that this work will help bridge understanding between the well-trodden grounds of Type I and Type II errors and changes in information theoretic content.
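For univariate Gaussian state estimates, the ɛ-thresholded KL change test described in this abstract can be sketched as follows. The closed-form Gaussian KL divergence is standard; the threshold handling is illustrative only, not the paper's implementation:

```python
import math

def kl_gaussian(mu0, sig0, mu1, sig1):
    """KL divergence KL(N0 || N1) between two univariate Gaussians."""
    return (math.log(sig1 / sig0)
            + (sig0**2 + (mu0 - mu1)**2) / (2 * sig1**2)
            - 0.5)

def change_detected(mu0, sig0, mu1, sig1, eps):
    """Flag a change when the information gain exceeds the tolerance eps."""
    return kl_gaussian(mu0, sig0, mu1, sig1) > eps

# Identical distributions carry zero information gain, so no alarm is raised;
# a shifted mean produces a divergence that can exceed a small eps.
```

Lowering `eps` makes the detector more sensitive (higher alarm rate), mirroring the trade-off between Type I and Type II errors discussed above.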

  16. Effects of preparation time and trial type probability on performance of anti- and pro-saccades.

    PubMed

    Pierce, Jordan E; McDowell, Jennifer E

    2016-02-01

    Cognitive control optimizes responses to relevant task conditions by balancing bottom-up stimulus processing with top-down goal pursuit. It can be investigated using the ocular motor system by contrasting basic prosaccades (look toward a stimulus) with complex antisaccades (look away from a stimulus). Furthermore, the amount of time allotted between trials, the need to switch task sets, and the time allowed to prepare for an upcoming saccade all impact performance. In this study the relative probabilities of anti- and pro-saccades were manipulated across five blocks of interleaved trials, while the inter-trial interval and trial type cue duration were varied across subjects. Results indicated that inter-trial interval had no significant effect on error rates or reaction times (RTs), while a shorter trial type cue led to more antisaccade errors and faster overall RTs. Responses following a shorter cue duration also showed a stronger effect of trial type probability, with more antisaccade errors in blocks with a low antisaccade probability and slower RTs for each saccade task when its trial type was unlikely. A longer cue duration yielded fewer errors and slower RTs, with a larger switch cost for errors compared to a short cue duration. Findings demonstrated that when the trial type cue duration was shorter, visual motor responsiveness was faster and subjects relied upon the implicit trial probability context to improve performance. When the cue duration was longer, increased fixation-related activity may have delayed saccade motor preparation and slowed responses, guiding subjects to respond in a controlled manner regardless of trial type probability. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Exploring effective multiplicity in multichannel functional near-infrared spectroscopy using eigenvalues of correlation matrices

    PubMed Central

    Uga, Minako; Dan, Ippeita; Dan, Haruka; Kyutoku, Yasushi; Taguchi, Y-h; Watanabe, Eiju

    2015-01-01

    Abstract. Recent advances in multichannel functional near-infrared spectroscopy (fNIRS) allow wide coverage of cortical areas while entailing the necessity to control family-wise errors (FWEs) due to increased multiplicity. Conventionally, the Bonferroni method has been used to control FWE. While Type I errors (false positives) can be strictly controlled, the application of a large number of channel settings may inflate the chance of Type II errors (false negatives). The Bonferroni-based methods are especially stringent in controlling Type I errors of the most activated channel with the smallest p value. To maintain a balance between Types I and II errors, effective multiplicity (Meff) derived from the eigenvalues of correlation matrices is a method that has been introduced in genetic studies. Thus, we explored its feasibility in multichannel fNIRS studies. Applying the Meff method to three kinds of experimental data with different activation profiles, we performed resampling simulations and found that Meff was controlled at 10 to 15 in a 44-channel setting. Consequently, the number of significantly activated channels remained almost constant regardless of the number of measured channels. We demonstrated that the Meff approach can be an effective alternative to Bonferroni-based methods for multichannel fNIRS studies. PMID:26157982
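The effective-multiplicity idea borrowed from genetic studies can be sketched with Nyholt's (2004) estimator, which needs only the correlation matrix of the channels; the Sidak-style per-channel threshold is one common way to use Meff. Both are generic illustrations, not the paper's exact procedure:

```python
def effective_tests(corr):
    """Nyholt-style effective number of tests Meff from a correlation matrix.

    Uses trace identities (sum of eigenvalues = M; sum of squared
    eigenvalues = squared Frobenius norm) so no eigendecomposition is needed:
    Meff = 1 + (M - 1) * (1 - Var(lambda) / M).
    """
    m = len(corr)
    sum_sq = sum(corr[i][j] ** 2 for i in range(m) for j in range(m))
    var_eig = (sum_sq - m) / (m - 1)  # sample variance of the eigenvalues
    return 1 + (m - 1) * (1 - var_eig / m)

def sidak_alpha(alpha, meff):
    """Per-channel significance threshold after correcting for Meff tests."""
    return 1 - (1 - alpha) ** (1 / meff)
```

Uncorrelated channels give Meff = M (full Bonferroni-like correction), while perfectly correlated channels give Meff = 1 (no correction needed), which is why the approach is less stringent than Bonferroni when channels covary.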

  18. The effect of timing errors in optical digital systems.

    NASA Technical Reports Server (NTRS)

    Gagliardi, R. M.

    1972-01-01

    The use of digital transmission with narrow light pulses appears attractive for data communications, but carries with it a stringent requirement on system bit timing. The effects of imperfect timing in direct-detection (noncoherent) optical binary systems are investigated using both pulse-position modulation and on-off keying for bit transmission. Particular emphasis is placed on specification of timing accuracy and an examination of system degradation when this accuracy is not attained. Bit error probabilities are shown as a function of timing errors from which average error probabilities can be computed for specific synchronization methods. Of significance is the presence of a residual or irreducible error probability in both systems, due entirely to the timing system, which cannot be overcome by the data channel.

  19. Type-II generalized family-wise error rate formulas with application to sample size determination.

    PubMed

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among the m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power, defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly available to end users. Complexities of the formulas are presented to give insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
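The r-power (probability of rejecting at least r false null hypotheses) can also be estimated by Monte Carlo, the strategy the authors compare against. This sketch assumes independent two-sided z-tests under a single-step Bonferroni procedure; it is not the rPowerSampleSize implementation:

```python
import random
from statistics import NormalDist

def r_power_bonferroni(m, false_idx, delta, alpha, r, n_sim=5000, seed=1):
    """Monte Carlo estimate of the r-power: probability that a single-step
    Bonferroni procedure at level alpha rejects at least r of the false
    null hypotheses, assuming independent two-sided z-tests with shift delta."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / (2 * m))  # Bonferroni cut-off
    hits = 0
    for _ in range(n_sim):
        # Only the false nulls (mean delta) can contribute true rejections.
        rejected_false = sum(
            1 for _ in false_idx if abs(rng.gauss(delta, 1.0)) > z_crit
        )
        if rejected_false >= r:
            hits += 1
    return hits / n_sim
```

With a large effect size the estimated r-power approaches 1, and with no effect it collapses toward the (generalized) type-I error level, which is the balance the paper's formulas make explicit.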

  20. Evaluating causes of error in landmark-based data collection using scanners

    PubMed Central

    Shearer, Brian M.; Cooke, Siobhán B.; Halenar, Lauren B.; Reber, Samantha L.; Plummer, Jeannette E.; Delson, Eric

    2017-01-01

    In this study, we assess the precision, accuracy, and repeatability of craniodental landmarks (Types I, II, and III, plus curves of semilandmarks) on a single macaque cranium digitally reconstructed with three different surface scanners and a microCT scanner. Nine researchers with varying degrees of osteological and geometric morphometric knowledge landmarked ten iterations of each scan (40 total) to test the effects of scan quality, researcher experience, and landmark type on levels of intra- and interobserver error. Two researchers additionally landmarked ten specimens from seven different macaque species using the same landmark protocol to test the effects of the previously listed variables relative to species-level morphological differences (i.e., observer variance versus real biological variance). Error rates within and among researchers by scan type were calculated to determine whether or not data collected by different individuals or on different digitally rendered crania are consistent enough to be used in a single dataset. Results indicate that scan type does not impact rate of intra- or interobserver error. Interobserver error is far greater than intraobserver error among all individuals, and is similar in variance to that found among different macaque species. Additionally, experience with osteology and morphometrics both positively contribute to precision in multiple landmarking sessions, even where less experienced researchers have been trained in point acquisition. Individual training increases precision (although not necessarily accuracy), and is highly recommended in any situation where multiple researchers will be collecting data for a single project. PMID:29099867

  1. 75 FR 48743 - Mandatory Reporting of Greenhouse Gases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ...EPA is proposing to amend specific provisions in the GHG reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These proposed changes include providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources in a facility, amending data reporting requirements to provide additional clarity on when different types of GHG emissions need to be calculated and reported, clarifying terms and definitions in certain equations, and technical corrections.

  2. 75 FR 79091 - Mandatory Reporting of Greenhouse Gases

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-17

    ...EPA is amending specific provisions in the greenhouse gas reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These final changes include generally providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources, amending data reporting requirements to provide additional clarity on when different types of greenhouse gas emissions need to be calculated and reported, clarifying terms and definitions in certain equations and other technical corrections and amendments.

  3. [Medical errors from positions of mutual relations of patient-lawyer-doctor].

    PubMed

    Radysh, Ia F; Tsema, Ie V; Mehed', V P

    2013-01-01

The article presents the basic theoretical and practical aspects of the problem of malpractice in the Ukrainian healthcare system. The essence of the term "malpractice" is explained with specific examples. The types of malpractice, the conditions under which it arises, and the kinds of liability for committing malpractice are considered. Special attention is given to the legal, mental, and ethical dimensions of the problem from the standpoint of protecting the rights of both the patient and the medical worker. The need to classify malpractice as intentional or unintentional, and as permissible or impermissible, is substantiated.

  4. Rain measurements from space using a modified Seasat-type radar altimeter

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.; Walsh, E. J.

    1982-01-01

The incorporation in the 13.5 GHz Seasat-type radar altimeter of a mode to measure rain rate is investigated. Specifically, an algorithm is developed relating the echo power at the various range bins to the rain rate, taking into consideration Mie scattering and path attenuation. The dependence of the algorithm on raindrop size distribution and nonuniform rain structure is examined and the associated uncertainties are defined. A technique for obtaining the drop size distribution through measurements of power at the top of the rain cell and the power difference through the cell is also investigated, together with an associated error analysis. A description of the minor hardware modifications to the basic Seasat design required to implement the rain measurements is given.

  5. Error identification and recovery by student nurses using human patient simulation: opportunity to improve patient safety.

    PubMed

    Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L

    2010-02-01

    This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.

  6. Effect of satellite formations and imaging modes on global albedo estimation

    NASA Astrophysics Data System (ADS)

    Nag, Sreeja; Gatebe, Charles K.; Miller, David W.; de Weck, Olivier L.

    2016-05-01

We confirm the applicability of using small satellite formation flight for multi-angular earth observation to retrieve global, narrow-band, narrow field-of-view albedo. The value of formation flight is assessed using a coupled systems engineering and science evaluation model, driven by Model Based Systems Engineering and Observing System Simulation Experiments. Albedo errors are calculated against bi-directional reflectance data obtained from NASA airborne campaigns made by the Cloud Absorption Radiometer for the seven major surface types, binned using MODIS' land cover map: water, forest, cropland, grassland, snow, desert and cities. A full tradespace of architectures with three to eight satellites, maintainable orbits and imaging modes (collective payload pointing strategies) is assessed. For an arbitrary four-satellite formation, changing the reference, nadir-pointing satellite dynamically reduces the average albedo error to 0.003, from 0.006 found in the static reference case. Tracking pre-selected waypoints with all the satellites reduces the average error further to 0.001, allows better polar imaging and continued operations even with a broken formation. An albedo error of 0.001 translates to 1.36 W/m2 or 0.4% in Earth's outgoing radiation error. Estimation errors are found to be independent of the satellites' altitude and inclination, if the nadir-looking satellite is changed dynamically. The formation satellites are restricted to differ only in right ascension of planes and mean anomalies within slotted bounds. Three satellites in some specific formations show average albedo errors of less than 2% with respect to airborne ground data, and seven satellites in any slotted formation outperform the monolithic error of 3.6%. In fact, the maximum possible albedo error, purely based on angular sampling, of 12% for monoliths is outperformed by a five-satellite formation in any slotted arrangement, and an eight-satellite formation can bring that error down fourfold to 3%. 
More than 70% ground spot overlap between the satellites is possible with 0.5° of pointing accuracy, 2 km of GPS accuracy and commands uplinked once a day. The formations can be maintained at less than 1 m/s of monthly ΔV per satellite.

  7. Application of human reliability analysis to nursing errors in hospitals.

    PubMed

    Inoue, Kayoko; Koizumi, Akio

    2004-12-01

Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern, and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the-art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it represents medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. This mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports for each module practice by the annual number of the corresponding module practices. The weight of a given element was calculated by summation of the incident report error rates for the element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports, with a total of 63,294,144 module practices conducted, were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and the reproducibility of analysis of medical incident reports. For both items, QA guaranteed the legitimacy of our model. Error rates for all module practices were approximately of the order of 10^-4 in all hospitals. 
Three major organizational factors were found to underlie medical errors: "violation of rules" with a weight of 826 x 10^-4, "failure of labor management" with a weight of 661 x 10^-4, and "defects in the standardization of nursing practices" with a weight of 495 x 10^-4.
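The error-rate and weight definitions in this abstract can be sketched directly. The record layout and field names below are invented for illustration; only the two formulas (rate = incidents / practices, weight = sum of rates over incidents tagged with an element) come from the abstract:

```python
from collections import Counter

def module_error_rates(incidents, practice_counts):
    """Error rate of each module practice: annual number of incident
    reports for the practice divided by its annual practice count."""
    n = Counter(rec["module"] for rec in incidents)
    return {m: n[m] / practice_counts[m] for m in practice_counts}

def element_weights(incidents, practice_counts):
    """Weight of an organizational element: sum of the module-practice
    error rates of the incident reports tagged with that element."""
    rates = module_error_rates(incidents, practice_counts)
    weights = {}
    for rec in incidents:
        for element in rec["elements"]:
            weights[element] = weights.get(element, 0.0) + rates[rec["module"]]
    return weights
```

With rates on the order of 10^-4 per practice, an element's weight grows with both how often it is implicated and how error-prone the implicated practices are.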

  8. Human-robot cooperative movement training: learning a novel sensory motor transformation during walking with robotic assistance-as-needed.

    PubMed

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-03-28

    A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. 
The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury.
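A minimal sketch of an error-based assist-as-needed update with a forgetting factor and an error-weighting deadband, in the spirit of the controller described above. The numeric constants (`forget`, `gain`, `band`) are illustrative, not the paper's identified values:

```python
def deadband_weight(error, band):
    """Ignore errors within the natural movement-variability band,
    so assistance can fade to zero despite variable movements."""
    return 0.0 if abs(error) <= band else 1.0

def update_assistance(force, error, forget=0.8, gain=1.5, band=0.5):
    """Trial-by-trial error-based update with a forgetting factor:
    assistance decays geometrically (forget < 1) unless a large
    kinematic error re-inflates it."""
    return forget * force + gain * deadband_weight(error, band) * error
```

Because `forget < 1`, the robot reduces its assistance faster than the learner's errors reappear, which is the condition the authors identify for truly assisting only as needed.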

  9. Optimal wavefront control for adaptive segmented mirrors

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

A ground-based astronomical telescope with a segmented primary mirror will suffer image-degrading wavefront aberrations from at least two sources: (1) atmospheric turbulence and (2) segment misalignment or figure errors of the mirror itself. This paper describes the derivation of a mirror control feedback matrix that assumes the presence of both types of aberration and is optimum in the sense that it minimizes the mean-squared residual wavefront error. Assumptions about the statistical nature of the wavefront measurement errors, atmospheric phase aberrations, and segment misalignment errors are made in the process of derivation. Examples of the degree of correction are presented for three different types of wavefront measurement data and compared to the results of simple corrections.

  10. Misrepresentations of evolutionary psychology in sex and gender textbooks.

    PubMed

    Winegard, Benjamin M; Winegard, Bo M; Deaner, Robert O

    2014-05-20

    Evolutionary psychology has provoked controversy, especially when applied to human sex differences. We hypothesize that this is partly due to misunderstandings of evolutionary psychology that are perpetuated by undergraduate sex and gender textbooks. As an initial test of this hypothesis, we develop a catalog of eight types of errors and document their occurrence in 15 widely used sex and gender textbooks. Consistent with our hypothesis, of the 12 textbooks that discussed evolutionary psychology, all contained at least one error, and the median number of errors was five. The most common types of errors were "Straw Man," "Biological Determinism," and "Species Selection." We conclude by suggesting improvements to undergraduate sex and gender textbooks.

  11. Errors in imaging of traumatic injuries.

    PubMed

    Scaglione, Mariano; Iaselli, Francesco; Sica, Giacomo; Feragalli, Beatrice; Nicola, Refky

    2015-10-01

The advent of multi-detector computed tomography (MDCT) has drastically improved the outcomes of patients with multiple traumatic injuries. However, there are still diagnostic challenges to be considered. A missed or delayed diagnosis in trauma patients can sometimes be related to errors of perception or other non-visual factors, while other errors are due to poor technique or poor image quality. In order to avoid serious complications, it is important for the practicing radiologist to be cognizant of the most common types of errors. The objective of this article is to review the various types of errors in the evaluation of patients with multiple trauma injuries or polytrauma with MDCT.

  12. Ultrasonographic Fetal Weight Estimation: Should Macrosomia-Specific Formulas Be Utilized?

    PubMed

    Porter, Blake; Neely, Cherry; Szychowski, Jeff; Owen, John

    2015-08-01

This study aims to derive an estimated fetal weight (EFW) formula in macrosomic fetuses, compare its accuracy to the 1986 Hadlock IV formula, and assess whether including maternal diabetes (MDM) improves estimation. Retrospective review of nonanomalous live-born singletons with birth weight (BWT) ≥ 4 kg and biometry within 14 days of birth. Formula accuracy measures included: (1) mean error (ME = EFW - BWT), (2) absolute mean error (AME = absolute value of [1]), and (3) mean percent error (MPE, [1]/BWT × 100%). Using log_e(BWT) as the dependent variable, multivariable linear regression produced a macrosomia-specific formula in a "training" dataset, which was verified with "validation" data. Formulas specific for MDM were also developed. Of the 403 pregnancies, birth gestational age was 39.5 ± 1.4 weeks, and median BWT was 4,240 g. The macrosomic formula from the training data (n = 201) had associated ME = 54 ± 284 g, AME = 234 ± 167 g, and MPE = 1.6 ± 6.2%; evaluation in the validation dataset (n = 202) showed similar errors. The Hadlock formula had associated ME = -369 ± 422 g, AME = 451 ± 332 g, MPE = -8.3 ± 9.3% (all p < 0.0001). Diabetes-specific formula errors were similar to the macrosomic formula errors (all p = NS). With BWT ≥ 4 kg, the macrosomic formula was significantly more accurate than Hadlock IV, which systematically underestimates fetal weight/BWT. Diabetes-specific formulas did not improve accuracy. A macrosomia-specific formula should be considered when macrosomia is suspected.
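The three accuracy measures defined in this abstract (ME, AME, MPE) can be sketched as follows; the sample values in the usage note are invented for illustration:

```python
def weight_errors(estimates, birth_weights):
    """Mean error (ME), absolute mean error (AME) and mean percent error
    (MPE) of estimated fetal weights against actual birth weights (grams)."""
    n = len(estimates)
    diffs = [e - b for e, b in zip(estimates, birth_weights)]
    me = sum(diffs) / n                                   # signed bias
    ame = sum(abs(d) for d in diffs) / n                  # magnitude of error
    mpe = sum(d / b * 100 for d, b in zip(diffs, birth_weights)) / n
    return me, ame, mpe
```

Note how a formula can have ME near zero (over- and under-estimates cancel) while AME stays large, which is why both are reported alongside the percent error.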

  13. A cross-linguistic data bank for oral picture naming in Dutch, English, German, French, Italian, Russian, Spanish, and Swedish (PEDOI).

    PubMed

    Kremin, Helgard; Akhutina, Tanya; Basso, Anna; Davidoff, Jules; De Wilde, Martine; Kitzing, Peter; Lorenz, Antje; Perrier, Danièle; van der Sandt-Koenderman, Mieke; Vendrell, Josep; Weniger, Dorothea; Apt, Pia; Arabia, Catherine; De Bleser, Ria; Cohen, Henri; Corbineau, Mathilde; Dolivet, Marie-Christine; Hirsh, Kathi; Lehoux, Emilie; Metz-Lutz, Mari Noëlle; Montañes, Patricia; Plagne, Stéphanie; Polonskaya, Natalya; Sirois, Mélanie; Stachowiak, Franz; Sweeney, Trione; Vish-Brink, Evy

    2003-11-01

The well-established effect of word frequency on adults' picture naming performance has now been called into question. This is particularly true for variables that are correlated with frequency, as is the case for age of word acquisition. Since the work of [Carroll and White, 1973], there has been growing agreement among researchers that this variable plays an important role in lexical access. Indeed, it has been shown ([Hodgson and Ellis, 1998]) that for normal English-speaking adults, only the variables 'age of acquisition' and 'name agreement' are independent predictors of naming success among the various variables considered. However, when brain-damaged subjects with and without degenerative pathologies are studied, word frequency and word length, as well as concept familiarity, all show significant effects ([Hirsh and Funnell, 1995]; [Lambon Ralph et al., 1998]; [Nickels and Howard, 1995]). Finally, it has been suggested that the production of specific error types may be related to such variables. According to [Nickels and Howard, 1994], the production of semantic errors is specifically affected by 'imageability', and in the recent study by [Kremin et al., 2001], 'age of acquisition' predicts (frank) word-finding difficulties.

  14. Detecting and overcoming systematic bias in high-throughput screening technologies: a comprehensive review of practical issues and methodological solutions.

    PubMed

    Caraus, Iurie; Alsuwailem, Abdulaziz A; Nadon, Robert; Makarenkov, Vladimir

    2015-11-01

Significant efforts have been made recently to improve data throughput and data quality in screening technologies related to drug design. The modern pharmaceutical industry relies heavily on high-throughput screening (HTS) and high-content screening (HCS) technologies, which include small molecule, complementary DNA (cDNA) and RNA interference (RNAi) types of screening. Data generated by these screening technologies are subject to several environmental and procedural systematic biases, which introduce errors into the hit identification process. We first review systematic biases typical of HTS and HCS screens. We highlight that study design issues and the way in which data are generated are crucial for providing unbiased screening results. Considering various data sets, including the publicly available ChemBank data, we assess the rates of systematic bias in experimental HTS by using plate-specific and assay-specific error detection tests. We describe main data normalization and correction techniques and introduce a general data preprocessing protocol. This protocol can be recommended for academic and industrial researchers involved in the analysis of current or next-generation HTS data. © The Author 2015. Published by Oxford University Press.
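One common plate-wise correction of the kind reviewed in this abstract is a robust z-score computed per plate, which removes plate-to-plate systematic offsets before hit calling. This is a generic normalization sketch, not the authors' specific protocol:

```python
import statistics

def robust_zscores(plate):
    """Per-plate robust z-scores: (x - median) / (1.4826 * MAD).

    The median and median absolute deviation (MAD) resist distortion by
    the hits themselves; 1.4826 rescales MAD to a Gaussian-consistent SD.
    Centering each plate separately removes plate-level systematic bias.
    """
    med = statistics.median(plate)
    mad = statistics.median(abs(x - med) for x in plate)
    scale = 1.4826 * mad
    return [(x - med) / scale for x in plate]
```

A well or compound whose robust z-score exceeds a chosen cut-off (e.g. |z| > 3) would then be flagged as a candidate hit on that plate.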

  15. Effects of plant phenology and vertical height on accuracy of radio-telemetry locations

    USGS Publications Warehouse

    Grovenburg, Troy W.; Jacques, Christopher N.; Klaver, Robert W.; DePerno, Christopher S.; Lehman, Chad P.; Brinkman, Todd J.; Robling, Kevin A.; Rupp, Susan P.; Jenks, Jonathan A.

    2013-01-01

The use of very high frequency (VHF) radio-telemetry remains widespread in studies of wildlife ecology and management. However, few studies have evaluated the influence of vegetative obstruction on accuracy in differing habitats with varying transmitter types and heights. Using adult and fawn collars at varying heights above the ground (0, 33, 66 and 100 cm) to simulate activities (bedded, feeding and standing) and ages (neonate, juvenile and adult) of deer Odocoileus spp., we collected 5,767 bearings and estimated 1,424 locations (28-30 for each of 48 subsamples) in three habitat types (pasture, grassland and forest), during two stages of vegetative growth (spring and late summer). Bearing error at a distance of 900 m was approximately twice as large for fawn (9.9°) as for adult deer collars (4.9°). Of 12 models developed to explain the variation in location error, the analysis of covariance model (HT*D + C*D + HT*TBA + C*TBA) containing interactions of height of collar above ground (HT), collar type (C), vertical height of understory vegetation (D) and tree basal area (TBA) was the best model (wi = 0.92) and explained ∼ 71% of the variation in location error. Location error was greater for both collar types at 0 and 33 cm above the ground than at 66 and 100 cm above the ground; however, location error was smaller for adult than for fawn collars. Vegetation metrics influenced location error, which increased with greater vertical height of understory vegetation and tree basal area. Further, interactions of vegetation metrics and categorical variables indicated significant effects on location error. Our results indicate that researchers need to consider study objectives, life history of the study animal, signal strength of collar (collar type), distance from transmitter to receiver, topographical changes in elevation, habitat composition and season when designing telemetry protocols.
Bearing distances in forested habitat should be decreased (approximately 23% in our study) compared to bearing distances in open habitat to maintain a consistent bearing error across habitats. Additionally, we believe that field biologists monitoring neonate ungulates for habitat selection should rely on visual locations rather than using VHF-collars and triangulation.
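As a rough geometric illustration of why bearing error matters at range (a hypothetical sketch, not the authors' ANCOVA model): at a fixed distance, a bearing that is off by an angle θ displaces the estimated location laterally by roughly distance × tan θ.

```python
import math

def lateral_location_error(distance_m: float, bearing_error_deg: float) -> float:
    """Approximate lateral displacement (m) of a triangulation bearing
    that is off by `bearing_error_deg` at range `distance_m`."""
    return distance_m * math.tan(math.radians(bearing_error_deg))

# Bearing errors reported in the abstract: 4.9 deg (adult) vs. 9.9 deg (fawn) at 900 m.
adult = lateral_location_error(900, 4.9)   # roughly 77 m of lateral displacement
fawn = lateral_location_error(900, 9.9)    # roughly 157 m -- about twice as large
```

This toy calculation only captures the angular component; the abstract's location-error model also depends on collar type, vegetation height and basal area.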

  16. Ultrasound fusion image error correction using subject-specific liver motion model and automatic image registration.

    PubMed

    Yang, Minglei; Ding, Hui; Zhu, Lei; Wang, Guangzhi

    2016-12-01

    Ultrasound fusion imaging is an emerging tool that benefits a variety of clinical applications, such as image-guided diagnosis and treatment of hepatocellular carcinoma and unresectable liver metastases. However, respiratory liver motion-induced misalignment of multimodal images (i.e., fusion error) compromises the effectiveness and practicability of this method. The purpose of this paper is to develop a subject-specific liver motion model and an automatic registration-based method to correct the fusion error. An online-built subject-specific motion model and an automatic image registration method for 2D ultrasound-3D magnetic resonance (MR) images were combined to compensate for respiratory liver motion. The key steps were: 1) build a subject-specific liver motion model for the current subject online and perform the initial registration of pre-acquired 3D MR and intra-operative ultrasound images; 2) during fusion imaging, compensate for liver motion first using the motion model, then apply an automatic registration method to further correct the respiratory fusion error. Evaluation experiments were conducted on a liver phantom and five subjects. In the phantom study, the fusion error (superior-inferior axis) was reduced from 13.90±2.38mm to 4.26±0.78mm by using the motion model only. The fusion error further decreased to 0.63±0.53mm by using the registration method. The registration method also decreased the rotation error from 7.06±0.21° to 1.18±0.66°. In the clinical study, the fusion error was reduced from 12.90±9.58mm to 6.12±2.90mm by using the motion model alone. Moreover, the fusion error decreased to 1.96±0.33mm by using the registration method. The proposed method can effectively correct the respiration-induced fusion error to improve the fusion image quality. This method can also reduce the error correction dependency on the initial registration of ultrasound and MR images. 
Overall, the proposed method can improve the clinical practicability of ultrasound fusion imaging. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. The Accuracy of GBM GRB Localizations

    NASA Astrophysics Data System (ADS)

    Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.

    2010-03-01

    We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on a timescale of many minutes or hours via GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
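The statistical-plus-systematic error model can be illustrated with a common simplification (an assumption on our part, not the paper's Bayesian machinery): if the two components are independent, they combine in quadrature.

```python
import math

def total_error(stat_deg: float, sys_deg: float) -> float:
    """Total localization error (deg) when independent statistical and
    systematic components are added in quadrature."""
    return math.hypot(stat_deg, sys_deg)

combined = total_error(3.0, 4.0)  # 5.0 -- systematics dominate once stat errors shrink
```

The point of constraining the systematic term is that, for bright bursts with small statistical errors, the quadrature sum is floored by the systematic component.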

  18. The Effectiveness of Chinese NNESTs in Teaching English Syntax

    ERIC Educational Resources Information Center

    Chou, Chun-Hui; Bartz, Kevin

    2007-01-01

    This paper evaluates the effect of Chinese non-native English-speaking teachers (NNESTs) on Chinese ESL students' struggles with English syntax. The paper first classifies Chinese learners' syntactic errors into 10 common types. It demonstrates how each type of error results from an internal attempt to translate a common Chinese construction into…

  19. Breaches of health information: are electronic records different from paper records?

    PubMed

    Sade, Robert M

    2010-01-01

    Breaches of electronic medical records constitute a type of healthcare error, but should be considered separately from other types of errors because the national focus on the security of electronic data justifies special treatment of medical information breaches. Guidelines for protecting electronic medical records should be applied equally to paper medical records.

  20. Scheduling periodic jobs that allow imprecise results

    NASA Technical Reports Server (NTRS)

    Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay

    1990-01-01

    The problem of scheduling periodic jobs in hard real-time systems that support imprecise computations is discussed. Two workload models of imprecise computations are presented. These models differ from traditional models in that a task may be terminated any time after it has produced an acceptable result. Each task is logically decomposed into a mandatory part followed by an optional part. In a feasible schedule, the mandatory part of every task is completed before the deadline of the task. The optional part refines the result produced by the mandatory part to reduce the error in the result. Applications are classified as type N or type C according to the undesirable effects of errors, and the two workload models characterize these two types of applications. The optional parts of the tasks in a type-N job need never be completed. The resulting quality of each type-N job is measured in terms of the average error in the results over several consecutive periods. A class of preemptive, priority-driven algorithms that leads to feasible schedules with small average error is described and evaluated.
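The mandatory/optional decomposition and the type-N average-error metric can be sketched as follows (a toy illustration under assumed error semantics, not the paper's scheduling algorithms): the error of one task instance is taken as the fraction of its optional work left unexecuted, and job quality is the mean of that error over consecutive periods.

```python
# Hypothetical sketch: each periodic task instance has a mandatory part that
# must finish by its deadline and an optional part that reduces result error
# the longer it is allowed to run.

def result_error(optional_needed: float, optional_done: float) -> float:
    """Error of one task instance: fraction of optional work left undone."""
    if optional_needed == 0:
        return 0.0
    return max(0.0, 1.0 - optional_done / optional_needed)

def average_error(instances):
    """Type-N quality metric: mean error over consecutive periods."""
    errors = [result_error(need, done) for need, done in instances]
    return sum(errors) / len(errors)

# Three periods: optional part fully done, half done, not run at all.
avg = average_error([(4.0, 4.0), (4.0, 2.0), (4.0, 0.0)])  # (0 + 0.5 + 1) / 3 = 0.5
```

A scheduler that favors optional parts of tasks with the largest remaining error would drive this average down, which is the intuition behind the priority-driven algorithms the abstract describes.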

  1. How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?

    PubMed Central

    Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina

    2015-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits. PMID:25653615

  2. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    PubMed

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76) and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  3. Different grades MEMS accelerometers error characteristics

    NASA Astrophysics Data System (ADS)

    Pachwicewicz, M.; Weremczuk, J.

    2017-08-01

    The paper presents the calibration results for two MEMS accelerometers of different price and quality grades and discusses the different types of accelerometer errors. Calibration for error determination is performed using reference centrifuge measurements. The design and measurement errors of the centrifuge are discussed as well. It is shown that the error characteristics of the two sensors are very different, and that the simple calibration methods presented in the literature cannot be applied in both cases.

  4. Winnowing DNA for rare sequences: highly specific sequence and methylation based enrichment.

    PubMed

    Thompson, Jason D; Shibahara, Gosuke; Rajan, Sweta; Pel, Joel; Marziali, Andre

    2012-01-01

    Rare mutations in cell populations are known to be hallmarks of many diseases and cancers. Similarly, differential DNA methylation patterns arise in rare cell populations with diagnostic potential, such as fetal cells circulating in maternal blood. Unfortunately, the frequency of alleles with diagnostic potential, relative to the wild-type background sequence, is often well below the frequency of errors in currently available methods for sequence analysis, including very high throughput DNA sequencing. We demonstrate a DNA preparation and purification method that, through non-linear electrophoretic separation in media containing oligonucleotide probes, achieves 10,000-fold enrichment of target DNA with single-nucleotide specificity, and 100-fold enrichment of unmodified methylated DNA differing from the background by the methylation of a single cytosine residue.

  5. The use of self checks and voting in software error detection - An empirical study

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy G.; Cha, Stephen S.; Knight, John C.; Shimeall, Timothy J.

    1990-01-01

    The results of an empirical study of software error detection using self checks and N-version voting are presented. Working independently, each of 24 programmers first prepared a set of self checks using just the requirements specification of an aerospace application, and then each added self checks to an existing implementation of that specification. The modified programs were executed to measure the error-detection performance of the checks and to compare this with error detection using simple voting among multiple versions. The analysis of the checks revealed great differences in the ability of individual programmers to design effective checks. It was found that some checks that might have been effective failed to detect an error because they were badly placed, and there were numerous instances of checks signaling nonexistent errors. In general, specification-based checks alone were not as effective as specification-based checks combined with code-based checks. Self checks made it possible to identify faults that had not been detected previously by voting among 28 versions of the program on over a million randomly generated inputs. This appeared to result from the fact that the self checks could examine the internal state of the executing program, whereas voting examines only the final results of computations. If internal states had to be identical in N-version voting systems, then there would be no reason to write multiple versions.
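The contrast between voting on final results and self checks on internal state can be sketched as follows (hypothetical code; the invariant shown is illustrative, not a check from the study):

```python
from collections import Counter

def majority_vote(outputs):
    """N-version voting: accept the most common final result (ties -> None).
    Only final outputs are visible to the voter."""
    counts = Counter(outputs).most_common(2)
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None
    return counts[0][0]

def self_check(partial_sums, result):
    """A self check can inspect internal state, not just the final result.
    Illustrative invariant: running partial sums never exceed the final sum
    (true when all summands are non-negative)."""
    return all(s <= result for s in partial_sums)

majority_vote([42, 42, 41])    # the single faulty version is outvoted
self_check([10, 25, 42], 42)   # internal-state invariant holds
```

A vote of [42, 42, 41] masks the faulty version but reveals nothing about where it went wrong, whereas a self check can flag the first intermediate value that violates an invariant.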

  6. Effect of Box-Cox transformation on power of Haseman-Elston and maximum-likelihood variance components tests to detect quantitative trait Loci.

    PubMed

    Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I

    2003-01-01

    Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (H-E) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and for the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of the observed empirical type I error to the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi2 distribution with 2 degrees of freedom), the empirical type I error rate ratios at the set alpha level of 0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level of 0.01, these rate ratios were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing also reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). 
Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi2), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
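The two computations named in the abstract, the one-parameter Box-Cox transform and the relative nominal inflation rate, can be sketched as follows (a minimal illustration; the lambda handling is the standard textbook form, not code from the paper):

```python
import math

def box_cox(x: float, lam: float) -> float:
    """One-parameter Box-Cox transform for x > 0:
    (x**lam - 1) / lam, with the log(x) limit at lam = 0."""
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

def nominal_inflation_rate(empirical_type_i: float, alpha: float) -> float:
    """Relative nominal inflation rate from the abstract: observed
    empirical type I error divided by the set alpha level."""
    return empirical_type_i / alpha

# Wald test on untransformed skewed phenotypes at alpha = 0.01 (from the abstract):
ratio = nominal_inflation_rate(0.0733, 0.01)  # 7.33 -- badly inflated
```

A ratio near 1.0 means the test holds its nominal level; the abstract's point is that Box-Cox transformation pulls these ratios back toward 1.0 for the skewed distribution.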

  7. Surprised at All the Entropy: Hippocampal, Caudate and Midbrain Contributions to Learning from Prediction Errors

    PubMed Central

    Schiffer, Anne-Marike; Ahlheim, Christiane; Wurm, Moritz F.; Schubotz, Ricarda I.

    2012-01-01

    Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We analyzed the BOLD responses of these regions of interest to the different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were to different degrees significantly involved in three different types of learning: acquisition of new information, learning from prediction errors and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information-theoretic accounts. PMID:22570715

  8. Judging Statistical Models of Individual Decision Making under Risk Using In- and Out-of-Sample Criteria

    PubMed Central

    Drichoutis, Andreas C.; Lusk, Jayson L.

    2014-01-01

    Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample. PMID:25029467

  9. Judging statistical models of individual decision making under risk using in- and out-of-sample criteria.

    PubMed

    Drichoutis, Andreas C; Lusk, Jayson L

    2014-01-01

    Despite the fact that conceptual models of individual decision making under risk are deterministic, attempts to econometrically estimate risk preferences require some assumption about the stochastic nature of choice. Unfortunately, the consequences of making different assumptions are, at present, unclear. In this paper, we compare three popular error specifications (Fechner, contextual utility, and Luce error) for three different preference functionals (expected utility, rank-dependent utility, and a mixture of those two) using in- and out-of-sample selection criteria. We find drastically different inferences about structural risk preferences across the competing functionals and error specifications. Expected utility theory is least affected by the selection of the error specification. A mixture model combining the two conceptual models assuming contextual utility provides the best fit of the data both in- and out-of-sample.

  10. Frequency and types of the medication errors in an academic emergency department in Iran: The emergent need for clinical pharmacy services in emergency departments.

    PubMed

    Zeraatchi, Alireza; Talebian, Mohammad-Taghi; Nejati, Amir; Dashti-Khavidaki, Simin

    2013-07-01

    Emergency departments (EDs) are characterized by the simultaneous care of multiple patients with various medical conditions. Because of the large number of patients with complex diseases, the speed and complexity of medication use, and an under-staffed, crowded working environment, medication errors are commonly made by emergency care providers. This study was designed to evaluate the incidence of medication errors among patients attending the ED of a teaching hospital in Iran. In this cross-sectional study, a total of 500 patients attending the ED were randomly assessed for the incidence and types of medication errors. Factors related to medication errors, such as working shift, weekday and the schedule of the trainees' educational program, were also evaluated. Nearly 22% of patients experienced at least one medication error. The rates of medication errors were 0.41 errors per patient and 0.16 errors per ordered medication. The frequency of medication errors was higher in men, middle-aged patients, the first weekdays, night-time work schedules and the first semester of the educational year of new junior emergency medicine residents. More than 60% of errors were prescription errors by physicians; the remainder were transcription or administration errors by nurses. More than 35% of the prescribing errors happened during the selection of drug dose and frequency. The most common medication errors by nurses during administration were omission errors (16.2%) followed by unauthorized drug (6.4%). Most of the medication errors involved anticoagulants and thrombolytics (41.2%), followed by antimicrobial agents (37.7%) and insulin (7.4%). In this study, at least one-fifth of the patients attending the ED experienced medication errors resulting from multiple factors. The most common prescription errors occurred when ordering drug dose and frequency; the most common administration errors were drug omission and unauthorized drug administration.

  11. Cognitive decision errors and organization vulnerabilities in nuclear power plant safety management: Modeling using the TOGA meta-theory framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cappelli, M.; Gadomski, A. M.; Sepiellis, M.

    In the field of nuclear power plant (NPP) safety modeling, the perceived role of socio-cognitive engineering (SCE) is continuously increasing. Today, the focus is especially on the identification of human and organizational decisional errors caused by operators and managers under high-risk conditions, as is evident from analyses of reports on past nuclear incidents. At present, engineering and social safety requirements need to enlarge their domain of interest so as to include all possible loss-generating events that could be the consequences of an abnormal state of an NPP. Socio-cognitive modeling of Integrated Nuclear Safety Management (INSM) using the TOGA meta-theory was discussed during the ICCAP 2011 Conference. In this paper, more detailed aspects of cognitive decision-making and its possible human errors and organizational vulnerabilities are presented. The formal TOGA-based network model for cognitive decision-making makes it possible to indicate and analyze the nodes and arcs in which plant operator and manager errors may appear. TOGA's multi-level IPK (Information, Preferences, Knowledge) model of abstract intelligent agents (AIAs) is applied. In the NPP context, a super-safety approach is also discussed, taking into consideration unexpected events and managing them from a systemic perspective. As the nature of human errors depends on the specific properties of the decision-maker and the decisional context of operation, a classification of decision-making using IPK is suggested. Several types of initial decision-making situations useful for the diagnosis of NPP operator and manager errors are considered. The developed models can be used as a basis for applications to NPP educational or engineering simulators for training NPP executive staff. (authors)

  12. Comparative Analysis of Daytime Fire Detection Algorithms, Using AVHRR Data for the 1995 Fire Season in Canada: Perspective for MODIS

    NASA Technical Reports Server (NTRS)

    Ichoku, Charles; Kaufman, Y. J.; Fraser, R. H.; Jin, J.-Z.; Park, W. M.; Lau, William K. M. (Technical Monitor)

    2001-01-01

    Two fixed-threshold algorithms, from the Canada Centre for Remote Sensing and the European Space Agency (CCRS and ESA), and three contextual algorithms, from GIGLIO, the International Geosphere-Biosphere Programme and the Moderate Resolution Imaging Spectroradiometer project (GIGLIO, IGBP and MODIS), were used for fire detection with Advanced Very High Resolution Radiometer (AVHRR) data acquired over Canada during the 1995 fire season. The CCRS algorithm was developed for the boreal ecosystem, while the other four are intended for global application. The MODIS algorithm, although developed specifically for use with MODIS sensor data, was applied to AVHRR in this study for comparative purposes. Fire detection accuracy assessment for the algorithms was based on comparisons with available 1995 burned-area ground survey maps covering five Canadian provinces. Overall accuracy estimates in terms of omission (CCRS=46%, ESA=81%, GIGLIO=75%, IGBP=51%, MODIS=81%) and commission (CCRS=0.35%, ESA=0.08%, GIGLIO=0.56%, IGBP=0.75%, MODIS=0.08%) errors over forested areas revealed large differences in performance between the algorithms, with no clear relation to algorithm type (fixed-threshold or contextual). CCRS performed best in detecting real forest fires, with the least omission error, while ESA and MODIS produced the highest omission error, probably because of their relatively high threshold values designed for global application. The commission error values appear small because the area of pixels falsely identified by each algorithm was expressed as a ratio of the vast unburned forest area. More detailed study showed that most commission errors in all the algorithms were incurred in nonforest agricultural areas, especially on days with very high surface temperatures. The advantage of the high thresholds in ESA and MODIS was that they incurred the least commission errors.
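The two accuracy measures can be sketched as follows (a minimal illustration with made-up areas; the denominators follow the definitions implied by the abstract, where commission error is normalized by the vast unburned reference area):

```python
def omission_error(burned_detected: float, burned_reference: float) -> float:
    """Fraction of the surveyed burned area the algorithm missed."""
    return 1.0 - burned_detected / burned_reference

def commission_error(false_detected: float, unburned_reference: float) -> float:
    """Falsely detected area as a ratio of the unburned reference area."""
    return false_detected / unburned_reference

# Hypothetical areas (km^2): detecting 54 of 100 burned gives 46% omission
# (cf. CCRS = 46%); 35 falsely flagged against 10,000 unburned stays tiny.
om = omission_error(54.0, 100.0)          # 0.46
com = commission_error(35.0, 10_000.0)    # 0.0035
```

This normalization is why the abstract cautions that the commission values "appear small": the same false-alarm area yields a near-zero ratio when divided by the whole unburned forest.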

  13. Temporal specificity of reward prediction errors signaled by putative dopamine neurons in rat VTA depends on ventral striatum

    PubMed Central

    Takahashi, Yuji K.; Langdon, Angela J.; Niv, Yael; Schoenbaum, Geoffrey

    2016-01-01

    Dopamine neurons signal reward prediction errors. This requires accurate reward predictions. It has been suggested that the ventral striatum provides these predictions. Here we tested this hypothesis by recording from putative dopamine neurons in the VTA of rats performing a task in which prediction errors were induced by shifting reward timing or number. In controls, the neurons exhibited error signals in response to both manipulations. However, dopamine neurons in rats with ipsilateral ventral striatal lesions exhibited errors only to changes in number and failed to respond to changes in timing of reward. These results, supported by computational modeling, indicate that predictions about the temporal specificity and the number of expected rewards are dissociable, and that dopaminergic prediction-error signals rely on the ventral striatum for the former but not the latter. PMID:27292535

  14. Proton upsets in LSI memories in space

    NASA Technical Reports Server (NTRS)

    Mcnulty, P. J.; Wyatt, R. C.; Filz, R. C.; Rothwell, P. L.; Farrell, G. E.

    1980-01-01

    Two types of large-scale integrated dynamic random access memory devices were tested and found to be subject to soft errors when exposed to protons incident at energies between 18 and 130 MeV. These errors are shown to differ significantly from those induced in the same devices by alphas from an Am-241 source. There is considerable variation among devices in their sensitivity to proton-induced soft errors, even among devices of the same type. For protons incident at 130 MeV, the soft error cross sections measured in these experiments varied from 10^-8 to 10^-6 sq cm/proton. For individual devices, however, the soft error cross section consistently increased with beam energy from 18 to 130 MeV. Analysis indicates that the soft errors induced by energetic protons result from spallation interactions between the incident protons and the nuclei of the atoms comprising the device. Because energetic protons are the most numerous of both the galactic and solar cosmic rays and form the inner radiation belt, proton-induced soft errors have potentially serious implications for many electronic systems flown in space.
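The reported cross sections follow from a simple ratio (hypothetical numbers below; the fluence value is assumed for illustration, not taken from the experiments):

```python
def soft_error_cross_section(n_errors: int, fluence_per_cm2: float) -> float:
    """Soft-error cross section (cm^2/proton): upsets observed divided by
    the proton fluence delivered to the device."""
    return n_errors / fluence_per_cm2

# E.g. 100 upsets after a fluence of 1e8 protons/cm^2:
sigma = soft_error_cross_section(100, 1e8)  # 1e-6 cm^2 -- the upper end of the reported range
```

Reading the ratio the other way gives the practical meaning: a device with sigma = 1e-6 cm^2 is expected to upset about once per 10^6 protons/cm^2 of fluence.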

  15. Assessment of the relative merits of a few methods to detect evolutionary trends.

    PubMed

    Laurin, Michel

    2010-12-01

    Some of the most basic questions about the history of life concern evolutionary trends. These include determining whether or not metazoans have become more complex over time, whether or not body size tends to increase over time (the Cope-Depéret rule), or whether or not brain size has increased over time in various taxa, such as mammals and birds. Despite the proliferation of studies on such topics, assessment of the reliability of results in this field is hampered by the variability of techniques used and the lack of statistical validation of these methods. To solve this problem, simulations are performed using a variety of evolutionary models (gradual Brownian motion, speciational Brownian motion, and Ornstein-Uhlenbeck), with or without a drift of variable amplitude, with variable variance of tips, and with bounds placed close or far from the starting values and final means of simulated characters. These are used to assess the relative merits (power, Type I error rate, bias, and mean absolute value of error on slope estimate) of several statistical methods that have recently been used to assess the presence of evolutionary trends in comparative data. Results show widely divergent performance of the methods. The simple, nonphylogenetic regression (SR) and variance partitioning using phylogenetic eigenvector regression (PVR) with a broken stick selection procedure have greatly inflated Type I error rate (0.123-0.180 at a 0.05 threshold), which invalidates their use in this context. However, they have the greatest power. Most variants of Felsenstein's independent contrasts (FIC; five of which are presented) have adequate Type I error rate, although two have a slightly inflated Type I error rate with at least one of the two reference trees (0.064-0.090 error rate at a 0.05 threshold). The power of all contrast-based methods is always much lower than that of SR and PVR, except under Brownian motion with a strong trend and distant bounds. 
Mean absolute value of error on slope of all FIC methods is slightly higher than that of phylogenetic generalized least squares (PGLS), SR, and PVR. PGLS performs well, with low Type I error rate, low error on regression coefficient, and power comparable with some FIC methods. Four variants of skewness analysis are examined, and a new method to assess significance of results is presented. However, all have consistently low power, except in rare combinations of trees, trend strength, and distance between final means and bounds. Overall, the results clearly show that FIC-based methods and PGLS are better than nonphylogenetic methods and variance partitioning with PVR. FIC methods and PGLS are sensitive to the model of evolution (and, hence, to branch length errors). Our results suggest that regressing raw character contrasts against raw geological age contrasts yields a good combination of power and Type I error rate. New software to facilitate batch analysis is presented.
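The inflated Type I error rate of nonphylogenetic regression (SR) reported above can be illustrated by Monte Carlo simulation. The sketch below is not the paper's method: it uses a deliberately simple toy phylogeny (independent sister-species pairs, names and parameters chosen for illustration) rather than the reference trees and full set of evolutionary models analyzed in the study. Trait and age both evolve by drift-free Brownian motion, so any detected "trend" is a false positive; contrasting within pairs plays the role of Felsenstein's independent contrasts (FIC).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def bm_tips_in_pairs(n_pairs, t_split=0.9):
    """Brownian motion tip values on a toy phylogeny of sister-species
    pairs: a shared path until t_split, independent paths until t = 1."""
    anc = rng.normal(0.0, np.sqrt(t_split), n_pairs)
    tips = anc[:, None] + rng.normal(0.0, np.sqrt(1.0 - t_split), (n_pairs, 2))
    return tips.ravel()

n_pairs, n_rep, alpha = 25, 2000, 0.05
naive_hits = contrast_hits = 0
for _ in range(n_rep):
    # Trait and "age" both evolve by drift-free BM: there is NO true trend.
    y = bm_tips_in_pairs(n_pairs)
    x = bm_tips_in_pairs(n_pairs)
    # Simple nonphylogenetic regression (SR) ignores shared ancestry.
    naive_hits += stats.linregress(x, y).pvalue < alpha
    # Contrast-based test: within-pair differences are independent.
    dy = y[0::2] - y[1::2]
    dx = x[0::2] - x[1::2]
    contrast_hits += stats.linregress(dx, dy).pvalue < alpha

naive_rate = naive_hits / n_rep
contrast_rate = contrast_hits / n_rep
print(f"SR Type I error rate:       {naive_rate:.3f}")
print(f"Contrast Type I error rate: {contrast_rate:.3f}")
```

Because both variables are phylogenetically clustered, the naive regression rejects the null well above the nominal 0.05 level, while the contrast-based rate stays near nominal, mirroring the qualitative pattern reported for SR versus FIC-based methods.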

  16. Effects of Head Rotation on Space- and Word-Based Reading Errors in Spatial Neglect

    ERIC Educational Resources Information Center

    Reinhart, Stefan; Keller, Ingo; Kerkhoff, Georg

    2010-01-01

Patients with right hemisphere lesions often omit or misread words on the left side of a text or the beginning letters of single words, which is termed neglect dyslexia (ND). Two types of reading errors are typically observed in ND: omissions and word-based reading errors. The former are considered space-based omission errors on the…

  17. Some Deep Structure Manifestations in Second Language Errors of English Voiced and Voiceless "th."

    ERIC Educational Resources Information Center

    Moustafa, Margaret Heiss

Native speakers of Egyptian Arabic make errors in their pronunciation of English that cannot always be accounted for by a contrastive analysis of Egyptian Arabic and English. This study focuses on three types of errors in the pronunciation of voiced and voiceless "th" made by fluent speakers of English. These errors were noted…

  18. Anatomic, Clinical, and Neuropsychological Correlates of Spelling Errors in Primary Progressive Aphasia

    ERIC Educational Resources Information Center

    Shim, HyungSub; Hurley, Robert S.; Rogalski, Emily; Mesulam, M.-Marsel

    2012-01-01

    This study evaluates spelling errors in the three subtypes of primary progressive aphasia (PPA): agrammatic (PPA-G), logopenic (PPA-L), and semantic (PPA-S). Forty-one PPA patients and 36 age-matched healthy controls were administered a test of spelling. The total number of errors and types of errors in spelling to dictation of regular words,…

  19. Sensitivity and specificity of two different automated external defibrillators.

    PubMed

    Israelsson, Johan; Wangenheim, Burkard von; Årestedt, Kristofer; Semark, Birgitta; Schildmeijer, Kristina; Carlsson, Jörg

    2017-11-01

    The aim was to investigate the clinical performance of two different types of automated external defibrillators (AEDs). Three investigators reviewed 2938 rhythm analyses performed by AEDs in 240 consecutive patients (median age 72, q1-q3=62-83) who had suffered cardiac arrest between January 2011 and March 2015. Two different AEDs were used (AED A n=105, AED B n=135) in-hospital (n=91) and out-of-hospital (n=149). Among 194 shockable rhythms, 17 (8.8%) were not recognized by AED A, while AED B recognized 100% (n=135) of shockable episodes (sensitivity 91.2 vs 100%, p<0.01). In AED A, 8 (47.1%) of these episodes were judged to be algorithm errors while 9 (52.9%) were caused by external artifacts. Among 1039 non-shockable rhythms, AED A recommended shock in 11 (1.0%), while AED B recommended shock in 63 (4.1%) of 1523 episodes (specificity 98.9 vs 95.9, p<0.001). In AED A, 2 (18.2%) of these episodes were judged to be algorithm errors (AED B, n=40, 63.5%), while 9 (81.8%) were caused by external artifacts (AED B, n=23, 36.5%). There were significant differences in sensitivity and specificity between the two different AEDs. A higher sensitivity of AED B was associated with a lower specificity while a higher specificity of AED A was associated with a lower sensitivity. AED manufacturers should work to improve the algorithms. In addition, AED use should always be reviewed with a routine for giving feedback, and medical personnel should be aware of the specific strengths and shortcomings of the device they are using. Copyright © 2017 Elsevier B.V. All rights reserved.
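The headline sensitivity and specificity figures can be re-derived from the episode counts given in the abstract. A minimal sketch (the helper functions are standard definitions, not part of the study; the counts are taken directly from the abstract):

```python
def sensitivity(tp, fn):
    """True-positive rate: shockable rhythms correctly recognized."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: non-shockable rhythms correctly left unshocked."""
    return tn / (tn + fp)

# AED A: 194 shockable rhythms, 17 missed; 1039 non-shockable, 11 shocks advised.
sens_a = sensitivity(194 - 17, 17)
spec_a = specificity(1039 - 11, 11)
# AED B: 135 shockable rhythms, none missed; 1523 non-shockable, 63 shocks advised.
sens_b = sensitivity(135, 0)
spec_b = specificity(1523 - 63, 63)

print(f"AED A: sensitivity {sens_a:.1%}, specificity {spec_a:.1%}")
print(f"AED B: sensitivity {sens_b:.1%}, specificity {spec_b:.1%}")
```

The computed values (91.2%/98.9% for AED A, 100%/95.9% for AED B) match the figures reported in the abstract, and make the trade-off explicit: the device with perfect sensitivity pays for it in specificity, and vice versa.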

  20. Are vowel errors influenced by consonantal context in the speech of persons with aphasia?

    NASA Astrophysics Data System (ADS)

    Gelfer, Carole E.; Bell-Berti, Fredericka; Boyle, Mary

    2004-05-01

    The literature suggests that vowels and consonants may be affected differently in the speech of persons with conduction aphasia (CA) or nonfluent aphasia with apraxia of speech (AOS). Persons with CA have shown similar error rates across vowels and consonants, while those with AOS have shown more errors for consonants than vowels. These data have been interpreted to suggest that consonants have greater gestural complexity than vowels. However, recent research [M. Boyle et al., Proc. International Cong. Phon. Sci., 3265-3268 (2003)] does not support this interpretation: persons with AOS and CA both had a high proportion of vowel errors, and vowel errors almost always occurred in the context of consonantal errors. To examine the notion that vowels are inherently less complex than consonants and are differentially affected in different types of aphasia, vowel production in different consonantal contexts for speakers with AOS or CA was examined. The target utterances, produced in carrier phrases, were bVC and bV syllables, allowing us to examine whether vowel production is influenced by consonantal context. Listener judgments were obtained for each token, and error productions were grouped according to the intended utterance and error type. Acoustical measurements were made from spectrographic displays.
