The next organizational challenge: finding and addressing diagnostic error.
Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H
2014-03-01
Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, rests on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. Two new approaches are described in case studies: Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians, and Kaiser Permanente's electronic health record-based reports that detect process breakdowns in the follow-up of abnormal findings. By raising awareness and implementing targeted programs, HCOs may begin to play an important role in addressing the problem of diagnostic error.
[Diagnostic Errors in Medicine].
Buser, Claudia; Bankova, Andriyana
2015-12-09
The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are cognitive errors, followed by system-related errors and no-fault errors. Cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy, as a retrospective quality assessment of clinical diagnosis, has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care than in hospital settings. On the other hand, inpatient errors are more severe than outpatient errors.
A national physician survey of diagnostic error in paediatrics.
Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B
2016-10-01
This cross-sectional survey explored paediatric physician perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54% (n = 127). Respondents had a median of 9 years of clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0%) respondents. Consultants reported significantly fewer diagnostic errors than trainees (p value = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked, clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions on the major cognitive, system-related and situational factors contributing to diagnostic error and also key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15%. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.
Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann
2008-01-01
Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151
The challenges in defining and measuring diagnostic error.
Zwaan, Laura; Singh, Hardeep
2015-06-01
Diagnostic errors have emerged as a serious patient safety problem but they are hard to detect and complex to define. At the research summit of the 2013 Diagnostic Error in Medicine 6th International Conference, we convened a multidisciplinary expert panel to discuss challenges in defining and measuring diagnostic errors in real-world settings. In this paper, we synthesize these discussions and outline key research challenges in operationalizing the definition and measurement of diagnostic error. Some of these challenges include 1) difficulties in determining error when the disease or diagnosis is evolving over time and in different care settings, 2) accounting for a balance between underdiagnosis and overaggressive diagnostic pursuits, and 3) determining disease diagnosis likelihood and severity in hindsight. We also build on these discussions to describe how some of these challenges can be addressed while conducting research on measuring diagnostic error.
Diagnostic decision-making and strategies to improve diagnosis.
Thammasitboon, Satid; Cutrer, William B
2013-10-01
A significant portion of diagnostic errors arises through cognitive errors resulting from inadequate knowledge, faulty data gathering, and/or faulty verification. Experts estimate that 75% of diagnostic failures can be attributed to failures in clinicians' diagnostic thinking. The cognitive processes that underlie the diagnostic thinking of clinicians are complex and intriguing, and it is imperative that clinicians explicitly appreciate and apply different cognitive approaches to improve their decision making. A dual-process model that unifies many theories of decision-making has emerged as a promising template for understanding how clinicians think and judge efficiently in the diagnostic reasoning process. The identification and implementation of strategies for decreasing or preventing such diagnostic errors has become a growing area of interest and research. Suggested strategies to decrease the incidence of diagnostic error include increasing clinicians' clinical expertise and avoiding inherent cognitive errors. Implementing interventions focused solely on avoiding errors may work effectively for patient safety issues such as medication errors. Addressing cognitive errors, however, requires equal effort on expanding the individual clinician's expertise. Providing cognitive support to clinicians for robust diagnostic decision-making serves as the final strategic target for decreasing diagnostic errors. Clinical guidelines and algorithms offer another method for streamlining decision-making and decreasing the likelihood of cognitive diagnostic errors. Addressing cognitive processing errors is undeniably the most challenging task in reducing diagnostic errors. While many suggested approaches exist, they are mostly based on theories and sciences in cognitive psychology, decision-making, and education. The proposed interventions are primarily suggestions, and very few of them have been tested in actual practice settings.
A collaborative research effort is required to address cognitive processing errors effectively. Researchers in various areas, including patient safety and quality improvement, decision-making, and problem solving, must work together to make medical diagnosis more reliable.
Advancing the research agenda for diagnostic error reduction.
Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep
2013-10-01
Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.
Cognitive aspect of diagnostic errors.
Phua, Dong Haur; Tan, Nigel C K
2013-01-01
Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.
Diagnostic Errors in Ambulatory Care: Dimensions and Preventive Strategies
ERIC Educational Resources Information Center
Singh, Hardeep; Weingart, Saul N.
2009-01-01
Despite an increasing focus on patient safety in ambulatory care, progress in understanding and reducing diagnostic errors in this setting lag behind many other safety concerns such as medication errors. To explore the extent and nature of diagnostic errors in ambulatory care, we identified five dimensions of ambulatory care from which errors may…
Missed opportunities for diagnosis: lessons learned from diagnostic errors in primary care.
Goyder, Clare R; Jones, Caroline H D; Heneghan, Carl J; Thompson, Matthew J
2015-12-01
Because of the difficulties inherent in diagnosis in primary care, it is inevitable that diagnostic errors will occur. However, despite the important consequences associated with diagnostic errors and their estimated high prevalence, teaching and research on diagnostic error is a neglected area. The aim was to ascertain the key learning points from GPs' experiences of diagnostic errors and the approaches to clinical decision making associated with these. The study was a secondary analysis of 36 qualitative interviews with GPs in Oxfordshire, UK, combining two datasets of semi-structured interviews. Questions focused on GPs' experiences of diagnosis and diagnostic errors (or near misses) in routine primary care and out of hours. Interviews were audio-recorded, transcribed verbatim, and analysed thematically. Learning points include GPs' reliance on 'pattern recognition' and the failure of this strategy to identify atypical presentations; the importance of considering all potentially serious conditions using a 'restricted rule out' approach; and identifying and acting on a sense of unease. Strategies to help manage uncertainty in primary care were also discussed. Learning from previous examples of diagnostic errors is essential if these events are to be reduced in the future, and this should be incorporated into GP training. At a practice level, learning points from experiences of diagnostic errors should be discussed more frequently, and more should be done to integrate these lessons nationally to understand and characterise diagnostic errors.
Using Fault Trees to Advance Understanding of Diagnostic Errors.
Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep
2017-11-01
Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of the diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been used successfully in other high-complexity, high-risk contexts. The article demonstrates how factors contributing to diagnostic errors can be systematically modeled with FTA to inform error understanding and error prevention. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, rapid identification of common pathways and interactions in a unified fashion, and calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities.
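The gate logic behind fault tree analysis is straightforward to sketch. Below is a minimal, hypothetical illustration (the event structure and probabilities are invented for this sketch, not taken from the 10 cases in the study): a diagnostic error is modeled as an AND of a flawed clinical assessment (itself an OR of two basic events) and a failed follow-up safety net, with independence assumed throughout as in textbook FTA.

```python
from math import prod

# Hypothetical fault tree for a diagnostic error (illustrative only).
# AND gate: the output event occurs only if ALL inputs occur.
# OR gate: the output event occurs if ANY input occurs.
# Basic events are assumed independent, as in textbook FTA.

def and_gate(*probs):
    """P(all input events occur) under independence."""
    return prod(probs)

def or_gate(*probs):
    """P(at least one input event occurs) under independence."""
    return 1 - prod(1 - p for p in probs)

# Assumed basic-event probabilities (hypothetical):
p_history_gap = 0.10   # incomplete history and examination
p_test_misread = 0.05  # misinterpreted diagnostic test result
p_no_followup = 0.20   # abnormal finding not followed up

# Top event: error requires a flawed assessment AND a failed safety net.
p_assessment_fails = or_gate(p_history_gap, p_test_misread)      # 0.145
p_diagnostic_error = and_gate(p_assessment_fails, p_no_followup)
print(round(p_diagnostic_error, 4))  # 0.029
```

Synthesizing several such trees, as the authors did across the 10 cases, amounts to merging shared basic events and recomputing the gates, which is what makes empirical estimates for causative pathways possible.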
Clinical Dental Faculty Members' Perceptions of Diagnostic Errors and How to Avoid Them.
Nikdel, Cathy; Nikdel, Kian; Ibarra-Noriega, Ana; Kalenderian, Elsbeth; Walji, Muhammad F
2018-04-01
Diagnostic errors are increasingly recognized as a source of preventable harm in medicine, yet little is known about their occurrence in dentistry. The aim of this study was to gain a deeper understanding of clinical dental faculty members' perceptions of diagnostic errors, types of errors that may occur, and possible contributing factors. The authors conducted semi-structured interviews with ten domain experts at one U.S. dental school in May-August 2016 about their perceptions of diagnostic errors and their causes. The interviews were analyzed using an inductive process to identify themes and key findings. The results showed that the participants varied in their definitions of diagnostic errors. While all identified missed diagnosis and wrong diagnosis, only four participants perceived that a delay in diagnosis was a diagnostic error. Some participants perceived that an error occurs only when the choice of treatment leads to harm. Contributing factors associated with diagnostic errors included the knowledge and skills of the dentist, not taking adequate time, lack of communication among colleagues, and cognitive biases such as premature closure based on previous experience. Strategies suggested by the participants to prevent these errors were taking adequate time when investigating a case, forming study groups, increasing communication, and putting more emphasis on differential diagnosis. These interviews revealed differing perceptions of dental diagnostic errors among clinical dental faculty members. To address the variations, the authors recommend adopting shared language developed by the medical profession to increase understanding.
Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.
Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian
2016-04-01
Although they have important implications for inmates and for the resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk.
Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error
ERIC Educational Resources Information Center
Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju
2009-01-01
Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…
Correcting AUC for Measurement Error.
Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang
2015-12-01
Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). Diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of the AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct the AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct the AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
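The attenuation the authors correct for is easy to see by simulation. The sketch below is illustrative only (Gaussian biomarker and additive Gaussian measurement error are assumptions of this sketch, not the authors' method): it computes the empirical Mann-Whitney AUC on an error-free biomarker and on the same biomarker observed with noise, and compares both against binormal theory.

```python
import random
from math import erf, sqrt

def auc(cases, controls):
    """Empirical AUC = P(case score > control score), Mann-Whitney form."""
    wins = sum((x > y) + 0.5 * (x == y) for x in cases for y in controls)
    return wins / (len(cases) * len(controls))

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

rng = random.Random(42)
n = 2000
# True biomarker: cases shifted up by one standard deviation (assumed).
controls = [rng.gauss(0.0, 1.0) for _ in range(n)]
cases = [rng.gauss(1.0, 1.0) for _ in range(n)]
# Observed biomarker: same values plus independent unit-variance error.
noisy = lambda xs: [x + rng.gauss(0.0, 1.0) for x in xs]

# Binormal theory: AUC = Phi(shift / sqrt(var_case + var_control)).
# Error-free: Phi(1/sqrt(2)) ~ 0.760; with noise in both groups: Phi(1/2) ~ 0.691.
auc_true = auc(cases, controls)
auc_observed = auc(noisy(cases), noisy(controls))
print(round(auc_true, 3), round(auc_observed, 3))
```

The observed AUC is pulled toward 0.5, which is the bias that correction methods such as the one proposed here aim to remove.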
System Related Interventions to Reduce Diagnostic Error: A Narrative Review
Singh, Hardeep; Graber, Mark L.; Kissam, Stephanie M.; Sorensen, Asta V.; Lenfestey, Nancy F.; Tant, Elizabeth M.; Henriksen, Kerm; LaBresh, Kenneth A.
2013-01-01
Background Diagnostic errors (missed, delayed, or wrong diagnoses) have gained recent attention and are associated with significant preventable morbidity and mortality. We reviewed the recent literature to identify interventions that have been, or could be, implemented to address systems-related factors that contribute directly to diagnostic error. Methods We conducted a comprehensive search using multiple search strategies. We first identified candidate articles in English published between 2000 and 2009 through a PubMed search restricted to articles related to diagnostic error or delay. We then sought additional papers from references in the initial dataset, searches of additional databases, and subject matter experts. Articles were included if they formally evaluated an intervention to prevent or reduce diagnostic error; however, we also included papers whose interventions were suggested but not tested, in order to inform the state of the science on the topic. We categorized interventions according to the step in the diagnostic process they targeted: patient-provider encounter; performance and interpretation of diagnostic tests; follow-up and tracking of diagnostic information; subspecialty- and referral-related; and patient-specific. Results We identified 43 articles for full review, of which 6 reported tested interventions and 37 contained suggestions for possible interventions. Empirical studies, though somewhat positive, were non-experimental or quasi-experimental and included a small number of clinicians or health care sites. Outcome measures in general were underdeveloped and varied markedly between studies, depending on the setting or step in the diagnostic process involved. Conclusions Despite a number of suggested interventions in the literature, few empirical studies have tested interventions to reduce diagnostic error in the last decade.
Advancing the science of diagnostic error prevention will require more robust study designs and rigorous definitions of diagnostic processes and outcomes to measure intervention effects. PMID:22129930
The global burden of diagnostic errors in primary care
Singh, Hardeep; Schiff, Gordon D; Graber, Mark L; Onakpoya, Igho; Thompson, Matthew J
2017-01-01
Diagnosis is one of the most important tasks performed by primary care physicians. The World Health Organization (WHO) recently prioritized patient safety areas in primary care, and included diagnostic errors as a high-priority problem. In addition, a recent report from the Institute of Medicine in the USA, ‘Improving Diagnosis in Health Care’, concluded that most people will likely experience a diagnostic error in their lifetime. In this narrative review, we discuss the global significance, burden and contributory factors related to diagnostic errors in primary care. We synthesize available literature to discuss the types of presenting symptoms and conditions most commonly affected. We then summarize interventions based on available data and suggest next steps to reduce the global burden of diagnostic errors. Research suggests that we are unlikely to find a ‘magic bullet’ and confirms the need for a multifaceted approach to understand and address the many systems and cognitive issues involved in diagnostic error. Because errors involve many common conditions and are prevalent across all countries, the WHO’s leadership at a global level will be instrumental to address the problem. Based on our review, we recommend that the WHO consider bringing together primary care leaders, practicing frontline clinicians, safety experts, policymakers, the health IT community, medical education and accreditation organizations, researchers from multiple disciplines, patient advocates, and funding bodies among others, to address the many common challenges and opportunities to reduce diagnostic error. This could lead to prioritization of practice changes needed to improve primary care as well as setting research priorities for intervention development to reduce diagnostic error. PMID:27530239
Identification of factors associated with diagnostic error in primary care.
Minué, Sergio; Bermúdez-Tamayo, Clara; Fernández, Alberto; Martín-Martín, José Jesús; Benítez, Vivian; Melguizo, Miguel; Caro, Araceli; Orgaz, María José; Prados, Miguel Angel; Díaz, José Enrique; Montoro, Rafael
2014-05-12
Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians' perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. 
This work sets out a new approach to studying the diagnostic decision-making process in PC, taking advantage of new technologies which allow immediate recording of the decision-making process.
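For readers unfamiliar with the sample-size arithmetic mentioned in the protocol above, here is a hedged sketch of the standard normal-approximation formula for estimating a proportion. The inputs below are generic illustrations chosen for this sketch, not a reproduction of the study's own calculation, whose exact inputs and design adjustments are detailed in the full protocol.

```python
from math import ceil

def proportion_sample_size(p, margin, z=1.96):
    """Minimum n for estimating a proportion p to within +/- margin
    at ~95% confidence (z = 1.96): n >= z^2 * p * (1 - p) / margin^2.
    Assumes simple random sampling; no finite-population, attrition,
    or clustering correction is applied."""
    return ceil(z * z * p * (1 - p) / (margin * margin))

# Worst-case proportion p = 0.5 with a 5-point margin (illustrative inputs):
print(proportion_sample_size(0.5, 0.05))  # 385
```

Real protocols typically inflate such base figures for expected attrition, clustering of patients within physicians, and similar design effects, which is why published totals rarely match the textbook formula exactly.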
Dual Processing and Diagnostic Errors
ERIC Educational Resources Information Center
Norman, Geoff
2009-01-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…
EPs welcome new focus on reducing diagnostic errors.
2015-12-01
Emergency medicine leaders welcome a major new report from the Institute of Medicine (IOM) calling on providers, policy makers, and government agencies to institute changes to reduce the incidence of diagnostic errors. The 369-page report, "Improving Diagnosis in Health Care," states that the rate of diagnostic errors in this country is unacceptably high and offers a long list of recommendations aimed at addressing the problem. These include large, systemic changes that involve improvements in multiple areas, including health information technology (HIT), professional education, teamwork, and payment reform. Further, of particular interest to emergency physicians are recommended changes to the liability system. The authors of the IOM report state that while most people will likely experience a significant diagnostic error in their lifetime, the importance of this problem is under-appreciated. According to conservative estimates, the report says 5% of adults who seek outpatient care each year experience a diagnostic error. The report also notes that research over many decades shows diagnostic errors contribute to roughly 10% of all deaths. The report says more steps need to be taken to facilitate inter-professional and intra-professional teamwork throughout the diagnostic process. Experts concur with the report's finding that mechanisms need to be developed so that providers receive ongoing feedback on their diagnostic performance.
Tracking Progress in Improving Diagnosis: A Framework for Defining Undesirable Diagnostic Events.
Olson, Andrew P J; Graber, Mark L; Singh, Hardeep
2018-01-29
Diagnostic error is a prevalent, harmful, and costly phenomenon. Multiple national health care and governmental organizations have recently identified the need to improve diagnostic safety as a high priority. A major barrier, however, is the lack of standardized, reliable methods for measuring diagnostic safety. Given the absence of reliable and valid measures for diagnostic errors, we need methods to help establish some type of baseline diagnostic performance across health systems, as well as to enable researchers and health systems to determine the impact of interventions for improving the diagnostic process. Multiple approaches have been suggested but none widely adopted. We propose a new framework for identifying "undesirable diagnostic events" (UDEs) that health systems, professional organizations, and researchers could further define and develop to enable standardized measurement and reporting related to diagnostic safety. We propose an outline for UDEs that identifies both conditions prone to diagnostic error and the contexts of care in which these errors are likely to occur. Refinement and adoption of this framework across health systems can facilitate standardized measurement and reporting of diagnostic safety.
Exploring Situational Awareness in Diagnostic Errors in Primary Care
Singh, Hardeep; Giardina, Traber Davis; Petersen, Laura A.; Smith, Michael; Wilson, Lindsey; Dismukes, Key; Bhagwath, Gayathri; Thomas, Eric J.
2013-01-01
Objective Diagnostic errors in primary care are harmful but poorly studied. To facilitate understanding of diagnostic errors in real-world primary care settings using electronic health records (EHRs), this study explored the use of the Situational Awareness (SA) framework from aviation human factors research. Methods A mixed-methods study was conducted involving reviews of EHR data followed by semi-structured interviews of selected providers from two institutions in the US. The study population included 380 consecutive patients with colorectal and lung cancers diagnosed between February 2008 and January 2009. Using a pre-tested data collection instrument, trained physicians identified diagnostic errors, defined as lack of timely action on one or more established indications for diagnostic work-up for lung and colorectal cancers. Twenty-six providers involved in cases with and without errors were interviewed. Interviews probed for providers' lack of SA and how this may have influenced the diagnostic process. Results Of 254 cases meeting inclusion criteria, errors were found in 30 (32.6%) of 92 lung cancer cases and 56 (33.5%) of 167 colorectal cancer cases. Analysis of interviews related to error cases revealed evidence of lack of one of four levels of SA applicable to primary care practice: information perception, information comprehension, forecasting future events, and choosing appropriate action based on the first three levels. In cases without error, the application of the SA framework provided insight into processes involved in attention management. Conclusions A framework of SA can help analyze and understand diagnostic errors in primary care settings that use EHRs. PMID:21890757
Newman-Toker, David E; Austin, J Matthew; Derk, Jordan; Danforth, Melissa; Graber, Mark L
2017-06-27
A 2015 National Academy of Medicine report on improving diagnosis in health care made recommendations for direct action by hospitals and health systems. Little is known about how health care provider organizations are addressing diagnostic safety/quality. This study is an anonymous online survey of safety professionals from US hospitals and health systems in July-August 2016. The survey was sent to those attending a Leapfrog Group webinar on misdiagnosis (n=188). The instrument was focused on knowledge, attitudes, and capability to address diagnostic errors at the institutional level. Overall, 61 (32%) responded, including community hospitals (42%), integrated health networks (25%), and academic centers (21%). Awareness was high, but commitment and capability were low (31% of leaders understand the problem; 28% have sufficient safety resources; and 25% have made diagnosis a top institutional safety priority). Ongoing efforts to improve diagnostic safety were sparse and mostly included root cause analysis and peer review feedback around diagnostic errors. The top three barriers to addressing diagnostic error were lack of awareness of the problem, lack of measures of diagnostic accuracy and error, and lack of feedback on diagnostic performance. The top two tools viewed as critically important for locally tackling the problem were routine feedback on diagnostic performance and culture change to emphasize diagnostic safety. Although hospitals and health systems appear to be aware of diagnostic errors as a major safety imperative, most organizations (even those that appear to be making a strong commitment to patient safety) are not yet doing much to improve diagnosis. Going forward, efforts to activate health care organizations will be essential to improving diagnostic safety.
Investigating the Link Between Radiologists' Gaze, Diagnostic Decision, and Image Content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C
2013-01-01
Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
Diagnostic Error in Stroke-Reasons and Proposed Solutions.
Bakradze, Ekaterina; Liberman, Ava L
2018-02-13
We discuss the frequency of stroke misdiagnosis and identify subgroups of stroke at high risk for specific diagnostic errors. In addition, we review common reasons for misdiagnosis and propose solutions to decrease error. According to a recent report by the National Academy of Medicine, most people in the USA are likely to experience a diagnostic error during their lifetimes. Nearly half of such errors result in serious disability and death. Stroke misdiagnosis is a major health care concern, with initial misdiagnosis estimated to occur in 9% of all stroke patients in the emergency setting. Under- or missed diagnosis (false negative) of stroke can result in adverse patient outcomes due to the preclusion of acute treatments and failure to initiate secondary prevention strategies. On the other hand, the overdiagnosis of stroke can result in inappropriate treatment, delayed identification of the actual underlying disease, and increased health care costs. Young patients, women, minorities, and patients presenting with non-specific, transient, or posterior circulation stroke symptoms are at increased risk of misdiagnosis. Strategies to decrease diagnostic error in stroke have largely focused on early stroke detection via bedside examination strategies and clinical decision rules. Targeted interventions to improve the diagnostic accuracy of stroke diagnosis among high-risk groups, as well as symptom-specific clinical decision support, are needed. There are a number of open questions in the study of stroke misdiagnosis. To improve patient outcomes, existing strategies to improve stroke diagnostic accuracy should be more broadly adopted and novel interventions devised and tested to reduce diagnostic errors.
Understanding diagnostic errors in medicine: a lesson from aviation
Singh, H; Petersen, L A; Thomas, E J
2006-01-01
The impact of diagnostic errors on patient safety in medicine is increasingly being recognized. Despite the current progress in patient safety research, the understanding of such errors and how to prevent them is inadequate. Preliminary research suggests that diagnostic errors have both cognitive and systems origins. Situational awareness is a model that is primarily used in aviation human factors research that can encompass both the cognitive and the systems roots of such errors. This conceptual model offers a unique perspective in the study of diagnostic errors. The applicability of this model is illustrated by the analysis of a patient whose diagnosis of spinal cord compression was substantially delayed. We suggest how the application of this framework could lead to potential areas of intervention and outline some areas of future research. It is possible that the use of such a model in medicine could help reduce errors in diagnosis and lead to significant improvements in patient care. Further research is needed, including the measurement of situational awareness and correlation with health outcomes. PMID:16751463
Commentary: Reducing diagnostic errors: another role for checklists?
Winters, Bradford D; Aswani, Monica S; Pronovost, Peter J
2011-03-01
Diagnostic errors are a widespread problem, although the true magnitude is unknown because they cannot currently be measured validly. These errors have received relatively little attention despite alarming estimates of associated harm and death. One promising intervention to reduce preventable harm is the checklist. This intervention has proven successful in aviation, in which situations are linear and deterministic (one alarm goes off and a checklist guides the flight crew to evaluate the cause). In health care, problems are multifactorial and complex. A checklist has been used to reduce central-line-associated bloodstream infections in intensive care units. Nevertheless, this checklist was incorporated into a culture-based safety program that engaged and changed behaviors and used robust measurement of infections to evaluate progress. In this issue, Ely and colleagues describe how three checklists could reduce the cognitive biases and mental shortcuts that underlie diagnostic errors, but point out that these tools still need to be tested. To be effective, they must reduce diagnostic errors (efficacy) and be routinely used in practice (effectiveness). Such tools must intuitively support how the human brain works; under time pressure, clinicians rarely think in conditional probabilities when making decisions. To move forward, it is necessary to accurately measure diagnostic errors (which could come from mapping out the diagnostic process, as has been done for the medication process, and measuring errors at each step) and pilot test interventions such as these checklists to determine whether they work.
Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine.
Okafor, Nnaemeka; Payne, Velma L; Chathampally, Yashwant; Miller, Sara; Doshi, Pratik; Singh, Hardeep
2016-04-01
Diagnostic errors are common in the emergency department (ED), but few studies have comprehensively evaluated their types and origins. We analysed incidents reported by ED physicians to determine disease conditions, contributory factors and patient harm associated with ED-related diagnostic errors. Between 1 March 2009 and 31 December 2013, ED physicians reported 509 incidents using a department-specific voluntary incident-reporting system that we implemented at two large academic hospital-affiliated EDs. For this study, we analysed 209 incidents related to diagnosis. A quality assurance team led by an ED physician champion reviewed each incident and interviewed physicians when necessary to confirm the presence/absence of diagnostic error and to determine the contributory factors. We generated descriptive statistics quantifying disease conditions involved, contributory factors and patient harm from errors. Among the 209 incidents, we identified 214 diagnostic errors associated with 65 unique diseases/conditions, including sepsis (9.6%), acute coronary syndrome (9.1%), fractures (8.6%) and vascular injuries (8.6%). Contributory factors included cognitive (n=317), system-related (n=192) and non-remediable (n=106). Cognitive factors included faulty information verification (41.3%) and faulty information processing (30.6%), whereas system factors included high workload (34.4%) and inefficient ED processes (40.1%). Non-remediable factors included atypical presentation (31.3%) and the patients' inability to provide a history (31.3%). Most errors (75%) involved multiple factors. Major harm was associated with 34/209 (16.3%) of reported incidents. Most diagnostic errors in the ED appeared to relate to common disease conditions. While sustaining diagnostic error reporting programmes might be challenging, our analysis reveals the potential value of such systems in identifying targets for improving patient safety in the ED.
Diagnosis is a team sport - partnering with allied health professionals to reduce diagnostic errors.
Thomas, Dana B; Newman-Toker, David E
2016-06-01
Diagnostic errors are the most common, most costly, and most catastrophic of medical errors. Interdisciplinary teamwork has been shown to reduce harm from therapeutic errors, but sociocultural barriers may impact the engagement of allied health professionals (AHPs) in the diagnostic process. A qualitative case study of the experience at a single institution around involvement of an AHP in the diagnostic process for acute dizziness and vertigo. We detail five diagnostic error cases in which the input of a physical therapist was central to correct diagnosis. We further describe evolution of the sociocultural milieu at the institution as it relates to AHP engagement in diagnosis. Five patients with acute vestibular symptoms were initially misdiagnosed by physicians and then correctly diagnosed based on input from a vestibular physical therapist. These included missed labyrinthine concussion and post-traumatic benign paroxysmal positional vertigo (BPPV); BPPV called gastroenteritis; BPPV called stroke; stroke called BPPV; and multiple sclerosis called BPPV. As a consequence of surfacing these diagnostic errors, initial resistance to physical therapy input to aid medical diagnosis has gradually declined, creating a more collaborative environment for 'team diagnosis' of patients with dizziness and vertigo at the institution. Barriers to AHP engagement in 'team diagnosis' include sociocultural norms that establish medical diagnosis as something reserved only for physicians. Drawing attention to the valuable diagnostic contributions of AHPs may help facilitate cultural change. Future studies should seek to measure diagnostic safety culture and then implement proven strategies to break down sociocultural barriers that inhibit effective teamwork and transdisciplinary diagnosis.
Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.
Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M
2006-10-01
Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.
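The endpoints in this abstract are standard screening-test quantities. As a point of reference, the usual confusion-matrix definitions are sketched below with hypothetical counts (not the study's data; under these definitions the false-negative rate is simply 1 - sensitivity, so the abstract's paired figures evidently use study-specific denominators):

```python
# Standard screening-test definitions, illustrated with hypothetical counts.
# tp = true positives, fn = false negatives; not the study's raw data.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true disease cases the test detects: TP / (TP + FN)."""
    return tp / (tp + fn)

def false_negative_rate(tp: int, fn: int) -> float:
    """Miss rate: FN / (TP + FN), i.e. 1 - sensitivity."""
    return fn / (tp + fn)

tp, fn = 48, 5  # illustrative counts only
print(f"sensitivity: {sensitivity(tp, fn):.1%}")                  # 90.6%
print(f"false-negative rate: {false_negative_rate(tp, fn):.1%}")  # 9.4%
```

Because the two quantities always sum to 1 under these definitions, error frequencies reported against a separate correlation standard (as is common in cytology quality studies) will not match this identity exactly.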
Dual processing and diagnostic errors.
Norman, Geoff
2009-09-01
In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.
Errors in imaging patients in the emergency setting
Reginelli, Alfonso; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca
2016-01-01
Emergency and trauma care produces a “perfect storm” for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting. PMID:26838955
Interceptive Beam Diagnostics - Signal Creation and Materials Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plum, Michael; Spallation Neutron Source, Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN
2004-11-10
The focus of this tutorial will be on interceptive beam diagnostics such as wire scanners, screens, and harps. We will start with an overview of the various ways beams interact with materials to create signals useful for beam diagnostics systems. We will then discuss the errors in a harp or wire scanner profile measurement caused by errors in wire position, number of samples, and signal errors. Finally, we will apply our results to two design examples: the SNS wire scanner system and the SNS target harp.
Educational agenda for diagnostic error reduction
Trowbridge, Robert L; Dhaliwal, Gurpreet; Cosby, Karen S
2013-01-01
Diagnostic errors are a major patient safety concern. Although the majority of diagnostic errors are partially attributable to cognitive mistakes, the most effective means of improving clinician cognition in order to achieve gains in diagnostic reliability are unclear. We propose a tripartite educational agenda for improving diagnostic performance among students, residents and practising physicians. This agenda includes strengthening the metacognitive abilities of clinicians, fostering intuitive reasoning and increasing awareness of the role of systems in the diagnostic process. The evidence supporting initiatives in each of these realms is reviewed and a course of future implementation and study is proposed. The barriers to designing and implementing this agenda are substantial and include limited evidence supporting these initiatives and the challenges of changing the practice patterns of practising physicians. Implementation will need to be accompanied by rigorous evaluation. PMID:23764435
Dynamic diagnostics of the error fields in tokamaks
NASA Astrophysics Data System (ADS)
Pustovitov, V. D.
2007-07-01
The error field diagnostics based on magnetic measurements outside the plasma is discussed. The analysed methods rely on measuring the plasma dynamic response to the finite-amplitude external magnetic perturbations, which are the error fields and the pre-programmed probing pulses. Such pulses can be created by the coils designed for static error field correction and for stabilization of the resistive wall modes, the technique developed and applied in several tokamaks, including DIII-D and JET. Here analysis is based on the theory predictions for the resonant field amplification (RFA). To achieve the desired level of the error field correction in tokamaks, the diagnostics must be sensitive to signals of several Gauss. Therefore, part of the measurements should be performed near the plasma stability boundary, where the RFA effect is stronger. While the proximity to the marginal stability is important, the absolute values of plasma parameters are not. This means that the necessary measurements can be done in the diagnostic discharges with parameters below the nominal operating regimes, with the stability boundary intentionally lowered. The estimates for ITER are presented. The discussed diagnostics can be tested in dedicated experiments in existing tokamaks. The diagnostics can be considered as an extension of the 'active MHD spectroscopy' used recently in the DIII-D tokamak and the EXTRAP T2R reversed field pinch.
Looking for trouble? Diagnostics expanding disease and producing patients.
Hofmann, Bjørn
2018-05-23
Novel tests give great opportunities for earlier and more precise diagnostics. At the same time, new tests expand disease, produce patients, and cause unnecessary harm in overdiagnosis and overtreatment. How can we evaluate diagnostics to obtain the benefits and avoid harm? One way is to pay close attention to the diagnostic process and its core concepts. Doing so reveals 3 errors that expand disease and increase overdiagnosis. The first error is to decouple diagnostics from harm, eg, by diagnosing insignificant conditions. The second error is to bypass proper validation of the relationship between test indicator and disease, eg, by introducing biomarkers for Alzheimer's disease before the tests are properly validated. The third error is to couple the name of disease to insignificant or indecisive indicators, eg, by lending the cancer name to preconditions, such as ductal carcinoma in situ. We need to avoid these errors to promote beneficial testing, bar harmful diagnostics, and evade unwarranted expansion of disease. Accordingly, we must stop identifying and testing for conditions that are only remotely associated with harm. We need more stringent verification of tests, and we must avoid naming indicators and indicative conditions after diseases. If not, we will end like ancient tragic heroes, succumbing because of our very best abilities. © 2018 John Wiley & Sons, Ltd.
Heuristics and Cognitive Error in Medical Imaging.
Itri, Jason N; Patel, Sohil H
2018-05-01
The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.
Error Consistency in Acquired Apraxia of Speech with Aphasia: Effects of the Analysis Unit
ERIC Educational Resources Information Center
Haley, Katarina L.; Cunningham, Kevin T.; Eaton, Catherine Torrington; Jacks, Adam
2018-01-01
Purpose: Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain…
[Cognitive errors in diagnostic decision making].
Gäbler, Martin
2017-10-01
Approximately 10-15% of our diagnostic decisions are faulty and may lead to unfavorable and dangerous outcomes that could be avoided. These diagnostic errors are mainly caused by cognitive biases in the diagnostic reasoning process. Medical diagnostic decision making rests on intuitive "System 1" and analytical "System 2" reasoning, and can be skewed by unconscious cognitive biases. These deviations can be positively influenced at both the systemic and the individual level. For the individual, metacognition (stepping back from the decision-making process) and debiasing strategies, such as verification, falsification, and ruling out worst-case scenarios, can lead to improved diagnostic decision making.
ERIC Educational Resources Information Center
Weaver, Sallie J.; Newman-Toker, David E.; Rosen, Michael A.
2012-01-01
Missed, delayed, or wrong diagnoses can have a severe impact on patients, providers, and the entire health care system. One mechanism implicated in such diagnostic errors is the deterioration of cognitive diagnostic skills that are used rarely or not at all over a prolonged period of time. Existing evidence regarding maintenance of effective…
[The factors affecting the results of mechanical jaundice management].
Malkov, I S; Shaimardanov, R Sh; Korobkov, V N; Filippov, V A; Khisamiev, I G
To improve the results of obstructive jaundice management by rational diagnostic and treatment strategies. Outcomes of 820 patients with obstructive jaundice syndrome were analyzed. Diagnostic and tactical mistakes were made at the pre-hospital stage in 143 (17.4%) patients and at the hospital stage in 105 (12.8%); in 53 (6.5%) cases, errors were observed at both stages. Retrospective analysis of severe postoperative complications and lethal outcomes in patients with obstructive jaundice showed that in 23.8% of cases they were explained by diagnostic and tactical mistakes at various stages of examination and treatment. We developed a management algorithm for obstructive jaundice to reduce the frequency of diagnostic and tactical errors. Its application reduced the rate of postoperative complications to 16.5% and the mortality rate to 3.0%.
Benefit-risk Evaluation for Diagnostics: A Framework (BED-FRAME).
Evans, Scott R; Pennello, Gene; Pantoja-Galicia, Norberto; Jiang, Hongyu; Hujer, Andrea M; Hujer, Kristine M; Manca, Claudia; Hill, Carol; Jacobs, Michael R; Chen, Liang; Patel, Robin; Kreiswirth, Barry N; Bonomo, Robert A
2016-09-15
The medical community needs systematic and pragmatic approaches for evaluating the benefit-risk trade-offs of diagnostics that assist in medical decision making. Benefit-Risk Evaluation of Diagnostics: A Framework (BED-FRAME) is a strategy for pragmatic evaluation of diagnostics designed to supplement traditional approaches. BED-FRAME evaluates diagnostic yield and addresses 2 key issues: (1) that diagnostic yield depends on prevalence, and (2) that different diagnostic errors carry different clinical consequences. As such, evaluating and comparing diagnostics depends on prevalence and the relative importance of potential errors. BED-FRAME provides a tool for communicating the expected clinical impact of diagnostic application and the expected trade-offs of diagnostic alternatives. BED-FRAME is a useful fundamental supplement to the standard analysis of diagnostic studies that will aid in clinical decision making. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
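The two key issues the abstract names, that diagnostic yield depends on prevalence and that different diagnostic errors carry different clinical consequences, can be sketched numerically. The function names and the 10:1 error weighting below are illustrative assumptions, not values from the BED-FRAME paper:

```python
# Sketch: expected error burden of a diagnostic test as prevalence varies.
# Sensitivity/specificity and the error weights are illustrative
# assumptions, not figures from BED-FRAME.

def expected_errors(sens, spec, prevalence, n=1000):
    """Expected false negatives and false positives per n patients."""
    diseased = n * prevalence
    healthy = n * (1 - prevalence)
    fn = diseased * (1 - sens)   # missed cases
    fp = healthy * (1 - spec)    # false alarms
    return fn, fp

def weighted_harm(fn, fp, w_fn=10.0, w_fp=1.0):
    """Combine errors; here a missed case is weighted 10x a false alarm."""
    return w_fn * fn + w_fp * fp

for prev in (0.01, 0.10, 0.50):
    fn, fp = expected_errors(sens=0.90, spec=0.95, prevalence=prev)
    print(prev, round(fn, 1), round(fp, 1), round(weighted_harm(fn, fp), 1))
```

At low prevalence the burden is dominated by false positives, while at high prevalence missed cases dominate, which is why comparing tests requires both the prevalence and the relative importance of each error type.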
Influence of ECG measurement accuracy on ECG diagnostic statements.
Zywietz, C; Celikag, D; Joseph, G
1996-01-01
Computer analysis of electrocardiograms (ECGs) provides a large amount of ECG measurement data, which may be used for diagnostic classification and storage in ECG databases. Until now, error limits for ECG measurements have neither been specified, nor has their influence on diagnostic statements been systematically investigated. An analytical method is presented to estimate the influence of measurement errors on the accuracy of diagnostic ECG statements. Systematic (offset) errors will usually increase false positive or false negative statements, since they shift the working point on the receiver operating characteristic curve. Measurement error dispersion broadens the distribution function of discriminative measurement parameters and therefore usually increases the overlap between them. This flattens the receiver operating characteristic curve and increases false positive and false negative classifications. The method was applied to ECG conduction defect diagnoses using the proposed International Electrotechnical Commission interval measurement tolerance limits. These limits appear too large, because more than 30% of false positive atrial conduction defect statements and 10-18% of false intraventricular conduction defect statements could be expected from the tolerated measurement errors alone. To assure the long-term usability of ECG measurement databases, it is recommended that systems provide their error tolerance limits obtained on a defined test set.
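The effect of a systematic offset on the working point can be sketched with a normal model of a healthy population. The interval values and the 120 ms cut-off below are illustrative assumptions, not the IEC tolerance limits discussed in the paper:

```python
# Sketch: a systematic (offset) measurement error shifts the working
# point on the ROC curve and inflates false positives. All numeric
# values are illustrative.
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, built from the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def false_positive_rate(true_mean, sigma, threshold, offset=0.0):
    """P(measured interval exceeds the diagnostic threshold) for a
    healthy population, given measurement spread sigma and a
    systematic offset added by the measurement system."""
    return 1 - normal_cdf(threshold, true_mean + offset, sigma)

# Healthy interval ~ 95 ms, diagnostic cut-off 120 ms (illustrative):
baseline = false_positive_rate(95, 10, 120)         # no offset
with_offset = false_positive_rate(95, 10, 120, 10)  # +10 ms systematic error
print(baseline, with_offset)  # the offset inflates the false positive rate
```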
Reexamining our bias against heuristics.
McLaughlin, Kevin; Eva, Kevin W; Norman, Geoff R
2014-08-01
Using heuristics offers several cognitive advantages, such as increased speed and reduced effort when making decisions, in addition to allowing us to make decisions in situations where missing data do not allow for formal reasoning. But the traditional view of heuristics is that they trade accuracy for efficiency. Here the authors discuss sources of bias in the literature implicating the use of heuristics in diagnostic error and highlight the fact that there are also data suggesting that, under certain circumstances, using heuristics may lead to better decisions than formal analysis. They suggest that diagnostic error is frequently misattributed to the use of heuristics and propose an alternative view whereby content knowledge is the root cause of diagnostic performance and heuristics lie on the causal pathway between knowledge and diagnostic error or success.
Medical errors in primary care clinics – a cross sectional study
2012-01-01
Background Patient safety is vital in patient care. There is a lack of studies on medical errors in primary care settings. The aim of the study was to determine the extent of diagnostic inaccuracies and management errors in publicly funded primary care clinics. Methods This was a cross-sectional study conducted in twelve publicly funded primary care clinics in Malaysia. A total of 1753 medical records were randomly selected in 12 primary care clinics in 2007 and were reviewed by trained family physicians for diagnostic, management and documentation errors, potential errors causing serious harm and likelihood of preventability of such errors. Results The majority of patient encounters (81%) were with medical assistants. Diagnostic errors were present in 3.6% (95% CI: 2.2, 5.0) of medical records and management errors in 53.2% (95% CI: 46.3, 60.2). Among management errors, medication errors were present in 41.1% (95% CI: 35.8, 46.4) of records, investigation errors in 21.7% (95% CI: 16.5, 26.8) and decision-making errors in 14.5% (95% CI: 10.8, 18.2). A total of 39.9% (95% CI: 33.1, 46.7) of these errors had the potential to cause serious harm. Problems of documentation, including illegible handwriting, were found in 98.0% (95% CI: 97.0, 99.1) of records. Nearly all errors detected (93.5%) were considered preventable. Conclusions The occurrence of medical errors was high in primary care clinics, particularly documentation and medication errors, and nearly all were preventable. Remedial interventions addressing the completeness of documentation and prescriptions are likely to reduce errors. PMID:23267547
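As a point of reference for proportions reported with confidence intervals like those above, the simplest normal-approximation (Wald) interval is computed as follows. The study's intervals were presumably adjusted for the clustered sampling design, so this sketch need not reproduce them exactly:

```python
# Sketch: plain Wald 95% confidence interval for an observed proportion.
# Survey-design adjustments (e.g. clustering by clinic) would widen this.
from math import sqrt

def prop_ci(p, n, z=1.96):
    """Wald interval for proportion p observed over n records."""
    half = z * sqrt(p * (1 - p) / n)
    return (max(0.0, p - half), min(1.0, p + half))

# Diagnostic-error proportion from the abstract: 3.6% of 1753 records.
low, high = prop_ci(0.036, 1753)
print(round(low * 100, 1), round(high * 100, 1))
```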
Reducing diagnostic errors in medicine: what's the goal?
Graber, Mark; Gordon, Ruthanna; Franklin, Nancy
2002-10-01
This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.
Diagnostic Reasoning and Cognitive Biases of Nurse Practitioners.
Lawson, Thomas N
2018-04-01
Diagnostic reasoning is often used colloquially to describe the process by which nurse practitioners and physicians come to the correct diagnosis, but a rich definition and description of this process has been lacking in the nursing literature. A literature review was conducted with theoretical sampling seeking conceptual insight into diagnostic reasoning. Four common themes emerged: Cognitive Biases and Debiasing Strategies, the Dual Process Theory, Diagnostic Error, and Patient Harm. Relevant cognitive biases are discussed, followed by debiasing strategies and application of the dual process theory to reduce diagnostic error and harm. The accuracy of diagnostic reasoning of nurse practitioners may be improved by incorporating these items into nurse practitioner education and practice. [J Nurs Educ. 2018;57(4):203-208.]. Copyright 2018, SLACK Incorporated.
Philosophy of science and the diagnostic process.
Willis, Brian H; Beebee, Helen; Lasserson, Daniel S
2013-10-01
This is an overview of the principles that underpin the philosophy of science and how they may provide a framework for the diagnostic process. Although philosophy dates back to antiquity, it is only more recently that philosophers have begun to enunciate the scientific method. Since Aristotle formulated deduction, other modes of reasoning have emerged, including induction, inference to the best explanation, falsificationism, theory-laden observation and Bayesian inference. Thus, rather than representing a single overriding dogma, the scientific method is a toolkit of ideas and principles of reasoning. Here we demonstrate that the diagnostic process is an example of science in action and is therefore subject to the principles encompassed by the scientific method. Although a number of these forms of reasoning are readily used by clinicians in practice, without a clear understanding of their pitfalls and the assumptions on which they rest, doctors are left open to diagnostic error. We conclude with a case example from the medico-legal literature in which diagnostic errors were made, to illustrate how applying the scientific method may mitigate the chance of diagnostic error.
The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error
Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G
2012-01-01
Objective To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908
Clinical decision-making: heuristics and cognitive biases for the ophthalmologist.
Hussain, Ahsen; Oestreicher, James
Diagnostic errors have a significant impact on health care outcomes and patient care. The underlying causes and development of diagnostic error are complex, with flaws in health care systems, as well as human error, playing a role. Cognitive biases and failures of decision-making shortcuts (heuristics) are human factors that can compromise the diagnostic process. We describe these mechanisms and their role for the clinician, and provide clinical scenarios to highlight the various points at which biases may emerge. We discuss strategies to modify the development and influence of these processes and to reduce the vulnerability of heuristics, in order to provide insight and improve clinical outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Bowe, Melissa; Sellers, Tyra P.
2018-01-01
The Performance Diagnostic Checklist-Human Services (PDC-HS) has been used to assess variables contributing to undesirable staff performance. In this study, three preschool teachers completed the PDC-HS to identify the factors contributing to four paraprofessionals' inaccurate implementation of error-correction procedures during discrete trial…
Analysis of Students' Error in Learning of Quadratic Equations
ERIC Educational Resources Information Center
Zakaria, Effandi; Ibrahim; Maat, Siti Mistima
2010-01-01
The purpose of the study was to determine students' errors in learning quadratic equations. The sample comprised 30 Form Three students from a secondary school in Jambi, Indonesia. A diagnostic test was used as the instrument of this study; it covered three components: factorization, completing the square and the quadratic formula. Diagnostic interview…
Norman, Geoffrey R; Monteiro, Sandra D; Sherbino, Jonathan; Ilgen, Jonathan S; Schmidt, Henk G; Mamede, Silvia
2017-01-01
Contemporary theories of clinical reasoning espouse a dual processing model, which consists of a rapid, intuitive component (Type 1) and a slower, logical and analytical component (Type 2). Although the general consensus is that this dual processing model is a valid representation of clinical reasoning, the causes of diagnostic errors remain unclear. Cognitive theories about human memory propose that such errors may arise from both Type 1 and Type 2 reasoning. Errors in Type 1 reasoning may be a consequence of the associative nature of memory, which can lead to cognitive biases. However, the literature indicates that, with increasing expertise (and knowledge), the likelihood of errors decreases. Errors in Type 2 reasoning may result from the limited capacity of working memory, which constrains computational processes. In this article, the authors review the medical literature to answer two substantial questions that arise from this work: (1) To what extent do diagnostic errors originate in Type 1 (intuitive) processes versus in Type 2 (analytical) processes? (2) To what extent are errors a consequence of cognitive biases versus a consequence of knowledge deficits? The literature suggests that both Type 1 and Type 2 processes contribute to errors. Although it is possible to experimentally induce cognitive biases, particularly availability bias, the extent to which these biases actually contribute to diagnostic errors is not well established. Educational strategies directed at the recognition of biases are ineffective in reducing errors; conversely, strategies focused on the reorganization of knowledge to reduce errors have small but consistent benefits.
Educational Diagnostic Assessment.
ERIC Educational Resources Information Center
Bejar, Isaac I.
1984-01-01
Approaches proposed for educational diagnostic assessment are reviewed and identified as deficit assessment and error analysis. The development of diagnostic instruments may require a reexamination of existing psychometric models and development of alternative ones. The psychometric and content demands of diagnostic assessment all but require test…
The current and ideal state of anatomic pathology patient safety.
Raab, Stephen Spencer
2014-01-01
An anatomic pathology diagnostic error may be secondary to a number of active and latent technical and/or cognitive components, which may occur anywhere along the total testing process in the clinical and/or laboratory domains. For the pathologist's interpretive steps of diagnosis, we examine Kahneman's framework of slow and fast thinking to explain different causes of error in precision (agreement) and in accuracy (truth). The pathologist's cognitive diagnostic process involves image pattern recognition, and a slow-thinking error may be caused by the application of different rationally constructed mental maps of image criteria/patterns by different pathologists. This type of error is partly related to a system failure to standardize the application of these maps. A fast-thinking error involves the flawed leap from image pattern to incorrect diagnosis. In the ideal state, anatomic pathology systems would target these cognitive error causes as well as the technical latent factors that lead to error.
Investigating the Association of Eye Gaze Pattern and Diagnostic Error in Mammography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voisin, Sophie; Pinto, Frank M; Xu, Songhua
2013-01-01
The objective of this study was to investigate the association between eye-gaze patterns and the diagnostic accuracy of radiologists for the task of assessing the likelihood of malignancy of mammographic masses. Six radiologists (2 expert breast imagers and 4 Radiology residents of variable training) assessed the likelihood of malignancy of 40 biopsy-proven mammographic masses (20 malignant and 20 benign) on a computer monitor. Eye-gaze data were collected using a commercial remote eye-tracker. Upon reviewing each mass, the radiologists were also asked to provide their assessment regarding the probability of malignancy of the depicted mass as well as a rating regarding the perceived difficulty of the diagnostic task. The collected data were analyzed using established algorithms, and various quantitative metrics were extracted to characterize the recorded gaze patterns. The extracted metrics were correlated with the radiologists' diagnostic decisions and perceived complexity scores. Results showed that the visual gaze pattern of radiologists varies substantially, not only with their experience level but also among individuals. However, some eye-gaze metrics appear to correlate with diagnostic error and perceived complexity more consistently. These results suggest that although gaze patterns are generally associated with diagnostic error and the human-perceived difficulty of the diagnostic task, there are substantial individual differences that are not explained simply by the experience level of the individual performing the diagnostic task.
NASA Astrophysics Data System (ADS)
Krupka, M.; Kalal, M.; Dostal, J.; Dudzak, R.; Juha, L.
2017-08-01
Classical interferometry has become a widely used method of active optical diagnostics. Its more advanced version, allowing reconstruction of three sets of data from just one specially designed interferogram (a so-called complex interferogram), was developed in the past and became known as complex interferometry. Along with the phase shift, which can also be retrieved using classical interferometry, the amplitude modifications of the probing part of the diagnostic beam caused by the object under study (to be called the signal amplitude), as well as the contrast of the interference fringes, can be retrieved using the complex interferometry approach. In order to partially compensate for errors in the reconstruction due to imperfections in the diagnostic beam intensity structure, as well as for errors caused by a non-ideal optical setup of the interferometer itself (including the quality of its optical components), a reference interferogram can be put to good use. This method of interferogram analysis of experimental data has been successfully implemented in practice. However, in the majority of interferometer setups (especially those employing wavefront division) the probe and the reference part of the diagnostic beam feature different intensity distributions over their respective cross sections. This introduces an additional error into the reconstruction of the signal amplitude and the fringe contrast, which cannot be resolved using the reference interferogram alone. To deal with this error, it was found that additional, separately recorded images of the intensity distribution of the probe and the reference part of the diagnostic beam (with no signal present) are needed. For the best results, sufficient shot-to-shot stability of the whole diagnostic system is required.
In this paper, efficiency of the complex interferometry approach for obtaining the highest possible accuracy of the signal amplitude reconstruction is verified using the computer generated complex and reference interferograms containing artificially introduced intensity variations in the probe and the reference part of the diagnostic beam. These sets of data are subsequently analyzed and the errors of the signal amplitude reconstruction are evaluated.
Should learners reason one step at a time? A randomised trial of two diagnostic scheme designs.
Blissett, Sarah; Morrison, Deric; McCarty, David; Sibbald, Matthew
2017-04-01
Making a diagnosis can be difficult for learners as they must integrate multiple clinical variables. Diagnostic schemes can help learners with this complex task. A diagnostic scheme is an algorithm that organises possible diagnoses by assigning signs or symptoms (e.g. systolic murmur) to groups of similar diagnoses (e.g. aortic stenosis and aortic sclerosis) and provides distinguishing features to help discriminate between similar diagnoses (e.g. carotid pulse). The current literature does not identify whether scheme layouts should guide learners to reason one step at a time in a terminally branching scheme or weigh multiple variables simultaneously in a hybrid scheme. We compared diagnostic accuracy, perceptual errors and cognitive load using two scheme layouts for cardiac auscultation. Focused on the task of identifying murmurs on Harvey, a cardiopulmonary simulator, 86 internal medicine residents used two scheme layouts. The terminally branching scheme organised the information into single-variable decisions. The hybrid scheme combined single-variable decisions with a chart integrating multiple distinguishing features. Using a crossover design, participants completed one set of murmurs (diastolic or systolic) with either the terminally branching or the hybrid scheme. The second set of murmurs was completed with the other scheme. A repeated-measures MANOVA was performed to compare diagnostic accuracy, perceptual errors and cognitive load between the scheme layouts. There was a main effect of the scheme layout (Wilks' λ = 0.841, F(3,80) = 5.1, p = 0.003). Use of a terminally branching scheme was associated with increased diagnostic accuracy (65 versus 53%, p = 0.02), fewer perceptual errors (0.61 versus 0.98 errors, p = 0.001) and lower cognitive load (3.1 versus 3.5/7, p = 0.023).
The terminally branching scheme was associated with improved diagnostic accuracy, fewer perceptual errors and lower cognitive load, suggesting that terminally branching schemes are effective for improving diagnostic accuracy. These findings can inform the design of schemes and other clinical decision aids. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Should we confirm our clinical diagnostic certainty by autopsies?
Podbregar, M; Voga, G; Krivec, B; Skale, R; Pareznik, R; Gabrscek, L
2001-11-01
To evaluate the frequency of diagnostic errors assessed by autopsies. Retrospective review of medical and pathological records in an 11-bed closed medical intensive care unit (ICU) at an 860-bed general hospital. Patients who died in the ICU between January 1998 and December 1999. Medical diagnoses were rated into three levels of clinical diagnostic certainty: complete certainty (group L1), minor diagnostic uncertainty (group L2), and major diagnostic uncertainty (group L3). The patients were divided into three error groups: group A, the autopsy confirmed the clinical diagnosis; group B, the autopsy demonstrated a new relevant diagnosis which would probably not have influenced the therapy and outcome; group C, the autopsy demonstrated a new relevant diagnosis which would probably have changed the therapy and outcome. The overall mortality was 20.3% (270/1331 patients). Autopsies were performed in 126 patients (46.9% of deaths), more often in younger patients (66.6+/-13.9 years vs 72.7+/-12.0 years, p<0.001), in patients with shorter ICU stay (4.7+/-5.6 days vs 6.7+/-8.7 days, p=0.054), and in patients in group L3 without chronic diseases (15/126 vs 1/144, p<0.001). Fatal but potentially treatable errors [group C, 12 patients (9.5%)] were found in 8.7%, 10.0%, and 10.5% of patients in groups L1, L2, and L3, respectively (NS between groups). An ICU length of stay shorter than 24 h was not related to the frequency of group C errors. Autopsies are performed more often in younger patients without chronic disease and in patients with low clinical diagnostic certainty. No level of clinical diagnostic certainty could predict the pathological findings.
Kang, Le; Carter, Randy; Darcy, Kathleen; Kauderer, James; Liao, Shu-Yuan
2013-01-01
In this article we use a latent class model (LCM) with prevalence modeled as a function of covariates to assess diagnostic test accuracy in situations where the true disease status is not observed, but observations on three or more conditionally independent diagnostic tests are available. A fast Monte Carlo EM (MCEM) algorithm with binary (disease) diagnostic data is implemented to estimate parameters of interest; namely, sensitivity, specificity, and prevalence of the disease as a function of covariates. To obtain standard errors for confidence interval construction of estimated parameters, the missing information principle is applied to adjust information matrix estimates. We compare the adjusted information matrix based standard error estimates with the bootstrap standard error estimates, both obtained using the fast MCEM algorithm, through an extensive Monte Carlo study. Simulation demonstrates that the adjusted information matrix approach estimates the standard error similarly to the bootstrap methods under certain scenarios. The bootstrap percentile intervals have satisfactory coverage probabilities. We then apply the LCM analysis to a real data set of 122 subjects from a Gynecologic Oncology Group (GOG) study of significant cervical lesion (S-CL) diagnosis in women with atypical glandular cells of undetermined significance (AGC) to compare the diagnostic accuracy of a histology-based evaluation, a CA-IX biomarker-based test and a human papillomavirus (HPV) DNA test. PMID:24163493
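A minimal sketch of the underlying latent class idea, stripped of the paper's covariate modeling and Monte Carlo E-step (constant prevalence, three conditionally independent binary tests, plain EM):

```python
# Sketch: two-class latent class model for three conditionally
# independent binary diagnostic tests, fitted by plain EM. This is a
# simplification of the paper's covariate-dependent MCEM approach.

def em_lcm(data, iters=200):
    """data: list of (t1, t2, t3) binary test results per subject.
    Returns (prevalence, sensitivities, specificities)."""
    prev, sens, spec = 0.5, [0.8, 0.8, 0.8], [0.8, 0.8, 0.8]
    for _ in range(iters):
        # E-step: posterior probability of disease for each subject,
        # assuming tests are independent given true disease status.
        post = []
        for t in data:
            pd, pn = prev, 1 - prev
            for j in range(3):
                pd *= sens[j] if t[j] else (1 - sens[j])
                pn *= (1 - spec[j]) if t[j] else spec[j]
            post.append(pd / (pd + pn))
        # M-step: update prevalence, sensitivity and specificity
        # as posterior-weighted proportions.
        prev = sum(post) / len(data)
        for j in range(3):
            sens[j] = sum(p for p, t in zip(post, data) if t[j]) / sum(post)
            spec[j] = (sum(1 - p for p, t in zip(post, data) if not t[j])
                       / sum(1 - p for p in post))
    return prev, sens, spec
```

With three tests in a single population this simple model is just identifiable, which is why the paper's setting requires three or more conditionally independent tests.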
Imperfect practice makes perfect: error management training improves transfer of learning.
Dyre, Liv; Tabor, Ann; Ringsted, Charlotte; Tolsgaard, Martin G
2017-02-01
Traditionally, trainees are instructed to practise with as few errors as possible during simulation-based training. However, transfer of learning may improve if trainees are encouraged to commit errors. The aim of this study was to assess the effects of error management instructions compared with error avoidance instructions during simulation-based ultrasound training. Medical students (n = 60) with no prior ultrasound experience were randomised to error management training (EMT) (n = 32) or error avoidance training (EAT) (n = 28). The EMT group was instructed to deliberately make errors during training. The EAT group was instructed to follow the simulator instructions and to commit as few errors as possible. Training consisted of 3 hours of simulation-based ultrasound training focusing on fetal weight estimation. Simulation-based tests were administered before and after training. Transfer tests were performed on real patients 7-10 days after the completion of training. Primary outcomes were transfer test performance scores and diagnostic accuracy. Secondary outcomes included performance scores and diagnostic accuracy during the simulation-based pre- and post-tests. A total of 56 participants completed the study. On the transfer test, EMT group participants attained higher performance scores (mean score: 67.7%, 95% confidence interval [CI]: 62.4-72.9%) than EAT group members (mean score: 51.7%, 95% CI: 45.8-57.6%) (p < 0.001; Cohen's d = 1.1, 95% CI: 0.5-1.7). There was a moderate improvement in diagnostic accuracy in the EMT group compared with the EAT group (16.7%, 95% CI: 10.2-23.3% weight deviation versus 26.6%, 95% CI: 16.5-36.7% weight deviation [p = 0.082; Cohen's d = 0.46, 95% CI: -0.06 to 1.0]). No significant interaction effects between group and performance improvements between the pre- and post-tests were found in either performance scores (p = 0.25) or diagnostic accuracy (p = 0.09). 
The provision of error management instructions during simulation-based training improves the transfer of learning to the clinical setting compared with error avoidance instructions. Rather than teaching to avoid errors, the use of errors for learning should be explored further in medical education theory and practice. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Reducing Diagnostic Error with Computer-Based Clinical Decision Support
ERIC Educational Resources Information Center
Greenes, Robert A.
2009-01-01
Information technology approaches to delivering diagnostic clinical decision support (CDS) are the subject of the papers to follow in the proceedings. These will address the history of CDS and present day approaches (Miller), evaluation of diagnostic CDS methods (Friedman), and the role of clinical documentation in supporting diagnostic decision…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank
2013-10-15
Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists' gaze behavior and image content.
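The AUC values quoted above can be computed without explicit ROC-curve integration: AUC equals the probability that a randomly chosen error case receives a higher predicted score than a randomly chosen non-error case (the Mann-Whitney formulation). A sketch of that equivalence, not tied to the authors' classifiers:

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC: the probability that a positive case outscores a negative case.

    Ties count as half a win (the Mann-Whitney U formulation)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))
```

A perfectly separating score list yields 1.0, a score list carrying no information yields 0.5, matching the interpretation of the group-model AUC of 0.792 as substantially better than chance.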
First measurements of error fields on W7-X using flux surface mapping
Lazerson, Samuel A.; Otte, Matthias; Bozhenkov, Sergey; ...
2016-08-03
Error fields have been detected and quantified using the flux surface mapping diagnostic system on Wendelstein 7-X (W7-X). A low-field ι = 1/2 magnetic configuration (where ι denotes the rotational transform divided by 2π), sensitive to error fields, was developed in order to detect their presence using the flux surface mapping diagnostic. In this configuration, a vacuum flux surface with rotational transform of n/m = 1/2 is created at the mid-radius of the vacuum flux surfaces. If no error fields are present, a vanishingly small n/m = 5/10 island chain should be present. Modeling indicates that if an n = 1 perturbing field is applied by the trim coils, a large n/m = 1/2 island chain will be opened. This island chain is used to create a perturbation large enough to be imaged by the diagnostic. Phase and amplitude scans of the applied field allow the measurement of a small (~0.04 m) intrinsic island chain with a 130° phase relative to the first module of the W7-X experiment. Lastly, these error fields are determined to be small and easily correctable by the trim coil system.
A real-time diagnostic and performance monitor for UNIX. M.S. Thesis
NASA Technical Reports Server (NTRS)
Dong, Hongchao
1992-01-01
There are now over one million UNIX sites, and the pace at which new installations are added is steadily increasing. Along with this increase comes a need to develop simple, efficient, effective, and adaptable ways of simultaneously collecting real-time diagnostic and performance data. This need exists because distributed systems can give rise to complex failure situations that are often unidentifiable with single-machine diagnostic software. The simultaneous collection of error and performance data is also important for research in failure prediction and error/performance studies. This paper introduces a portable method to concurrently collect real-time diagnostic and performance data on a distributed UNIX system. The combined diagnostic/performance data collection is implemented on a distributed multi-computer system using SUN4s as servers. The approach uses existing UNIX system facilities to gather system dependability information such as error and crash reports. In addition, performance data such as CPU utilization, disk usage, I/O transfer rate and network contention is also collected. In the future, the collected data will be used to identify dependability bottlenecks and to analyze the impact of failures on system performance.
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
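For a linear standard curve y = a + b·c read back as c = (y − a)/b, first-order propagation of error combines the uncertainty in the measured response with the uncertainty in the fitted curve parameters. A hedged sketch of the delta-method calculation (the function name and the independence assumption are mine, not from the paper, which fits more general standard curves):

```python
import math

def concentration_error(y, a, b, var_y, var_a, var_b):
    """Predicted concentration c = (y - a) / b and its delta-method standard error.

    Assumes the errors in y, a, and b are independent (fit covariances ignored)."""
    c = (y - a) / b
    # Partial derivatives of c with respect to y, a, and b
    dc_dy = 1.0 / b
    dc_da = -1.0 / b
    dc_db = -(y - a) / b ** 2
    var_c = dc_dy ** 2 * var_y + dc_da ** 2 * var_a + dc_db ** 2 * var_b
    return c, math.sqrt(var_c)
```

In a real calibration the covariance between the fitted intercept and slope is usually nonzero and would add a cross term to var_c; it is dropped here only to keep the sketch short.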
Harolds, Jay A
2016-09-01
The work system in which diagnosis takes place is affected by the external environment, which includes requirements such as certification, accreditation, and regulations. How errors are reported, malpractice, and the system for payment are some other aspects of the external environment. Improving the external environment is expected to decrease errors in diagnosis. More research on improving the diagnostic process is needed.
Automated detection of heuristics and biases among pathologists in a computer-based system.
Crowley, Rebecca S; Legowski, Elizabeth; Medvedeva, Olga; Reitmeyer, Kayse; Tseytlin, Eugene; Castine, Melissa; Jukic, Drazen; Mello-Thoms, Claudia
2013-08-01
The purpose of this study is threefold: (1) to develop an automated, computer-based method to detect heuristics and biases as pathologists examine virtual slide cases, (2) to measure the frequency and distribution of heuristics and errors across three levels of training, and (3) to examine relationships of heuristics to biases, and biases to diagnostic errors. The authors conducted the study using a computer-based system to view and diagnose virtual slide cases. The software recorded participant responses throughout the diagnostic process, and automatically classified participant actions based on definitions of eight common heuristics and/or biases. The authors measured frequency of heuristic use and bias across three levels of training. Biases studied were detected at varying frequencies, with availability and search satisficing observed most frequently. There were few significant differences by level of training. For representativeness and anchoring, the heuristic was used appropriately as often or more often than it was used in biased judgment. Approximately half of the diagnostic errors were associated with one or more biases. We conclude that heuristic use and biases were observed among physicians at all levels of training using the virtual slide system, although their frequencies varied. The system can be employed to detect heuristic use and to test methods for decreasing diagnostic errors resulting from cognitive biases.
NASA Astrophysics Data System (ADS)
Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano
2017-09-01
The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone by two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined in the course of the three phases of the AQMEII activity, aimed at building up a diagnostic methodology for model evaluation, is pursued here, and novel diagnostic methods are proposed. In addition to evaluating the base-case simulation, in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing the detection of the timescale and the fields that the two models are most sensitive to. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ~1.5 days account for 70-85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20 % of the quadratic total error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone observations in summer in both Europe and North America); (iv) the CMAQ ozone error has a weak/negligible dependence on the errors in NO2, while the error in NO2 significantly impacts the ozone error produced by Chimere; (v) the response of the models to variations of anthropogenic emissions and boundary conditions shows a pronounced spatial heterogeneity, while the seasonal variability of the response is found to be less marked.
Only during the winter season does the zeroing of boundary values for North America produce a spatially uniform deterioration of the model accuracy across the majority of the continent.
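The bias/variance/covariance error components used above follow the standard decomposition of mean square error, MSE = (m̄ − ō)² + (σ_m − σ_o)² + 2σ_mσ_o(1 − r), where m and o are the modelled and observed series and r their correlation. A minimal sketch with population statistics and hypothetical data, not the AQMEII evaluation code:

```python
import math

def mse_components(model, obs):
    """Split MSE into bias^2, variance, and covariance terms (population statistics).

    Requires both series to have nonzero standard deviation."""
    n = len(model)
    mean_m = sum(model) / n
    mean_o = sum(obs) / n
    sd_m = math.sqrt(sum((m - mean_m) ** 2 for m in model) / n)
    sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in obs) / n)
    r = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs)) / (n * sd_m * sd_o)
    bias_sq = (mean_m - mean_o) ** 2
    variance = (sd_m - sd_o) ** 2
    covariance = 2.0 * sd_m * sd_o * (1.0 - r)
    return bias_sq, variance, covariance
```

The three terms always sum to the MSE, which is what lets the study attribute fractions of the total quadratic error to distinct model deficiencies.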
MFP scanner motion characterization using self-printed target
NASA Astrophysics Data System (ADS)
Kim, Minwoong; Bauer, Peter; Wagner, Jerry K.; Allebach, Jan P.
2015-01-01
Multifunctional printers (MFP) are products that combine the functions of a printer, scanner, and copier. Our goal is to help customers to be able to easily diagnose scanner or print quality issues with their products by developing an automated diagnostic system embedded in the product. We specifically focus on the characterization of scanner motions, which may be defective due to irregular movements of the scan-head. The novel design of our test page and two-stage diagnostic algorithm are described in this paper. The most challenging issue is to evaluate the scanner performance properly when both printer and scanner units contribute to the motion errors. In the first stage called the uncorrected-print-error-stage, aperiodic and periodic motion behaviors are characterized in both the spatial and frequency domains. Since it is not clear how much of the error is contributed by each unit, the scanned input is statistically analyzed in the second stage called the corrected-print-error-stage. Finally, the described diagnostic algorithms output the estimated scan error and print error separately as RMS values of the displacement of the scan and print lines, respectively, from their nominal positions in the scanner or printer motion direction. We validate our test page design and approaches by ground truth obtained from a high-precision, chrome-on-glass reticle manufactured using semiconductor chip fabrication technologies.
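The RMS values reported by the diagnostic above are simply the root mean square of line-position deviations from nominal. A sketch of that metric (names are assumed, not taken from the paper):

```python
import math

def rms_displacement(measured, nominal):
    """RMS displacement of measured line positions from their nominal positions."""
    n = len(measured)
    return math.sqrt(sum((m - q) ** 2 for m, q in zip(measured, nominal)) / n)
```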
ERIC Educational Resources Information Center
Severo, Milton; Silva-Pereira, Fernanda; Ferreira, Maria Amelia
2013-01-01
Several studies have shown that the standard error of measurement (SEM) can be used as an additional “safety net” to reduce the frequency of false-positive or false-negative student grading classifications. Practical examinations in clinical anatomy are often used as diagnostic tests to admit students to course final examinations. The aim of this…
ERIC Educational Resources Information Center
Arieli-Attali, Meirav; Liu, Ying
2016-01-01
Diagnostic assessment approaches intend to provide fine-grained reports of what students know and can do, focusing on their areas of strengths and weaknesses. However, current application of such diagnostic approaches is limited by the scoring method for item responses; important diagnostic information, such as type of errors and strategy use is…
Measures to Improve Diagnostic Safety in Clinical Practice
Singh, Hardeep; Graber, Mark L; Hofer, Timothy P
2016-01-01
Timely and accurate diagnosis is foundational to good clinical practice and an essential first step to achieving optimal patient outcomes. However, a recent Institute of Medicine report concluded that most of us will experience at least one diagnostic error in our lifetime. The report argues for efforts to improve the reliability of the diagnostic process through better measurement of diagnostic performance. The diagnostic process is a dynamic team-based activity that involves uncertainty, plays out over time, and requires effective communication and collaboration among multiple clinicians, diagnostic services, and the patient. Thus, it poses special challenges for measurement. In this paper, we discuss how the need to develop measures to improve diagnostic performance could move forward at a time when the scientific foundation needed to inform measurement is still evolving. We highlight challenges and opportunities for developing potential measures of “diagnostic safety” related to clinical diagnostic errors and associated preventable diagnostic harm. In doing so, we propose a starter set of measurement concepts for initial consideration that seem reasonably related to diagnostic safety, and call for these to be studied and further refined. This would enable safe diagnosis to become an organizational priority and facilitate quality improvement. Health care systems should consider measurement and evaluation of diagnostic performance as essential to timely and accurate diagnosis and to the reduction of preventable diagnostic harm. PMID:27768655
Spencer, Bruce D
2012-06-01
Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
Najat, Dereen
2017-01-01
Background: Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Materials and Methods: Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and examine the types of platforms used in the selected diagnostic labs. Results: The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The percentage of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%).
Most quality control schemes at Sulaimani hospitals focus only on the analytical phase, and none of the pre-analytical errors were recorded. Interestingly, none of the labs were internationally accredited; therefore, corrective actions are needed at these hospitals to ensure better health outcomes. Internal and External Quality Assessment Schemes (EQAS) for the pre-analytical phase at Sulaimani clinical laboratories should be implemented at public hospitals. Furthermore, lab personnel, particularly phlebotomists, need continuous training on the importance of sample quality to obtain accurate test results. PMID:28107395
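The proportional Z test used above to compare error frequencies between hospitals is the standard two-proportion z test with a pooled standard error. A minimal sketch with illustrative counts, not the study's data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se
```

At the conventional 5% significance level, |z| > 1.96 indicates that the two labs' error rates differ by more than sampling variation would explain.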
Diagnostic grade wireless ECG monitoring.
Garudadri, Harinath; Chi, Yuejie; Baker, Steve; Majumdar, Somdeb; Baheti, Pawan K; Ballard, Dan
2011-01-01
In remote monitoring of Electrocardiogram (ECG), it is very important to ensure that the diagnostic integrity of signals is not compromised by sensing artifacts and channel errors. It is also important for the sensors to be extremely power efficient to enable wearable form factors and long battery life. We present an application of Compressive Sensing (CS) as an error mitigation scheme at the application layer for wearable, wireless sensors in diagnostic grade remote monitoring of ECG. In our previous work, we described an approach to mitigate errors due to packet losses by projecting ECG data to a random space and recovering a faithful representation using sparse reconstruction methods. Our contributions in this work are twofold. First, we present an efficient hardware implementation of random projection at the sensor. Second, we validate the diagnostic integrity of the reconstructed ECG after packet loss mitigation. We validate our approach on MIT and AHA databases comprising more than 250,000 normal and abnormal beats using EC57 protocols adopted by the Food and Drug Administration (FDA). We show that sensitivity and positive predictivity of a state-of-the-art ECG arrhythmia classifier is essentially invariant under CS based packet loss mitigation for both normal and abnormal beats even at high packet loss rates. In contrast, the performance degrades significantly in the absence of any error mitigation scheme, particularly for abnormal beats such as Ventricular Ectopic Beats (VEB).
Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett
2018-06-20
Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. 
Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.
"First, know thyself": cognition and error in medicine.
Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo
2016-04-01
Although error is an integral part of the world of medicine, physicians have always been little inclined to take their own mistakes into account, and the extraordinary technological progress observed in recent decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. This failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way for new strategies made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach has led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical mode, which tends to be largely automatic and fast-reactive, and a slow or analytical mode, which permits rationally founded answers. One feature of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the heuristic triggered fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing ourselves and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.
Cardiac examination and the effect of dual-processing instruction in a cardiopulmonary simulator.
Sibbald, Matt; McKinney, James; Cavalcanti, Rodrigo B; Yu, Eric; Wood, David A; Nair, Parvathy; Eva, Kevin W; Hatala, Rose
2013-08-01
Use of dual processing has been widely touted as a strategy to reduce diagnostic error in clinical medicine. However, this strategy has not been tested among medical trainees with complex diagnostic problems. We sought to determine whether dual-processing instruction could reduce diagnostic error across a spectrum of experience with trainees undertaking the cardiac physical exam. Three experiments were conducted using a similar design to teach the cardiac physical exam using a cardiopulmonary simulator. One experiment was conducted in each of three groups: experienced, intermediate and novice trainees. In all three experiments, participants were randomized to receive undirected or dual-processing verbal instruction during teaching, practice and testing phases. When tested, dual-processing instruction did not change the probability assigned to the correct diagnosis in any of the three experiments. Among intermediates, there was an apparent interaction between the diagnosis tested and the effect of dual-processing instruction. Among relative novices, dual-processing instruction may have dampened the harmful effect of a bias away from the correct diagnosis. Further work is needed to define the role of dual-processing instruction in reducing cognitive error. This study suggests that it cannot be blindly applied to complex diagnostic problems such as the cardiac physical exam.
An Instructor's Diagnostic Aid for Feedback in Training.
ERIC Educational Resources Information Center
Andrews, Dee H.; Uliano, Kevin C.
1988-01-01
Instructor's Diagnostic Aid for Feedback in Training (IDAFT) is a computer-assisted method based on error analysis, domains of learning, and events of instruction. Its use with Navy team instructors is currently being explored. (JOW)
NASA Astrophysics Data System (ADS)
Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang
2018-02-01
Coherent modulation imaging, providing fast convergence and high resolution from a single diffraction pattern, is a promising technique to satisfy the urgent demand for on-line multiple-parameter diagnostics with a single setup in high-power laser facilities (HPLF). However, the influence of noise on the final calculated parameters of interest has not yet been investigated. Based on a series of simulations with twenty different sampling beams generated from the practical parameters and performance of HPLF, a quantitative analysis of the statistical results was first carried out after considering five different error sources. We found that the background noise of the detector and high quantization error seriously affect the final accuracy, and that different parameters have different sensitivities to different noise sources. The simulation results and the corresponding analysis point to potential directions for further improving the final accuracy of parameter diagnostics, which is critically important to its formal application in the daily routines of HPLF.
Comparing diagnostic tests on benefit-risk.
Pennello, Gene; Pantoja-Galicia, Norberto; Evans, Scott
2016-01-01
Comparing diagnostic tests on accuracy alone can be inconclusive. For example, a test may have better sensitivity than another test yet worse specificity. Comparing tests on benefit-risk may be more conclusive because the clinical consequences of diagnostic error are considered. For benefit-risk evaluation, we propose diagnostic yield, the expected distribution of subjects with true positive, false positive, true negative, and false negative test results in a hypothetical population. We construct a table of diagnostic yield that includes the number of false positive subjects experiencing adverse consequences from unnecessary work-up. We then develop a decision theory for evaluating tests. The theory provides additional interpretation to quantities in the diagnostic yield table. It also indicates that the expected utility of a test relative to a perfect test is a weighted accuracy measure: the average of sensitivity and specificity weighted for prevalence and for the relative importance of false positive and false negative testing errors, also interpretable as the cost-benefit ratio of treating non-diseased and diseased subjects. We propose plots of diagnostic yield, weighted accuracy, and relative net benefit of tests as functions of prevalence or cost-benefit ratio. Concepts are illustrated with hypothetical screening tests for colorectal cancer, with test-positive subjects being referred to colonoscopy.
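The diagnostic yield table described above is fully determined by prevalence, sensitivity, and specificity in a hypothetical screened population. A sketch of the expected-counts calculation (function and parameter names are mine, not the authors'):

```python
def diagnostic_yield(n, prevalence, sensitivity, specificity):
    """Expected TP/FP/TN/FN counts for a test applied to a hypothetical population."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = diseased * sensitivity   # diseased subjects correctly flagged
    fn = diseased - tp            # diseased subjects missed
    tn = healthy * specificity    # healthy subjects correctly cleared
    fp = healthy - tn             # healthy subjects sent to unnecessary work-up
    return {"TP": tp, "FP": fp, "TN": tn, "FN": fn}
```

Multiplying the FP count by a per-work-up adverse-event rate gives the expected-harms column the authors add to the table; at low prevalence, false positives dominate even for quite specific tests.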
Random Versus Nonrandom Peer Review: A Case for More Meaningful Peer Review.
Itri, Jason N; Donithan, Adam; Patel, Sohil H
2018-05-10
Random peer review programs are not optimized to discover cases with diagnostic error and thus have inherent limitations with respect to educational and quality improvement value. Nonrandom peer review offers an alternative approach in which diagnostic error cases are targeted for collection during routine clinical practice. The objective of this study was to compare error cases identified through random and nonrandom peer review approaches at an academic center. During the 1-year study period, the number of discrepancy cases and the discrepancy score were determined for each approach. The nonrandom peer review process collected 190 cases, of which 60 were scored as 2 (minor discrepancy), 94 as 3 (significant discrepancy), and 36 as 4 (major discrepancy). In the random peer review process, 1,690 cases were reviewed, of which 1,646 were scored as 1 (no discrepancy), 44 were scored as 2 (minor discrepancy), and none were scored as 3 or 4. Several teaching lessons and quality improvement measures were developed as a result of analysis of error cases collected through the nonrandom peer review process. Our experience supports the implementation of nonrandom peer review as a replacement for random peer review, with nonrandom peer review serving as a more effective method for collecting diagnostic error cases with educational and quality improvement value.
Li, Yan; Li, Na; Han, Qunying; He, Shuixiang; Bae, Ricard S.; Liu, Zhengwen; Lv, Yi; Shi, Bingyin
2014-01-01
This study was conducted to evaluate the performance of physical examination (PE) skills during our diagnostic medicine course and analyze the characteristics of the data collected to provide information for practical guidance to improve the quality of teaching. Seventy-two fourth-year medical students were enrolled in the study. All received an assessment of PE skills after receiving a 17-week formal training course and systematic teaching. Their performance was evaluated and recorded in detail using a checklist, which included 5 aspects of PE skills: examination techniques, communication and care skills, content items, appropriateness of examination sequence, and time taken. Error frequency and type were designated as the assessment parameters in the survey. The results showed that the distribution and the percentage in examination errors between male and female students and among the different body parts examined were significantly different (p<0.001). The average error frequency per student in females (0.875) was lower than in males (1.375) although the difference was not statistically significant (p = 0.167). The average error frequency per student in cardiac (1.267) and pulmonary (1.389) examinations was higher than in abdominal (0.867) and head, neck and nervous system examinations (0.917). Female students had a lower average error frequency than males in cardiac examinations (p = 0.041). Additionally, error in examination techniques was the highest type of error among the 5 aspects of PE skills irrespective of participant gender and assessment content (p<0.001). These data suggest that PE skills in cardiac and pulmonary examinations and examination techniques may be included in the main focus of improving the teaching of diagnostics in these medical students. PMID:25329685
ERIC Educational Resources Information Center
Clayman, Deborah P. Goldweber
The ability of 100 second-grade boys and girls to self-correct oral reading errors was studied in relationship to visual-form perception, phonic skills, response speed, and reading level. Each child was tested individually with the Bender-Error Test, the Gray Oral Paragraphs, and the Roswell-Chall Diagnostic Reading Test and placed into a group of…
The biasing effect of clinical history on physical examination diagnostic accuracy.
Sibbald, Matthew; Cavalcanti, Rodrigo B
2011-08-01
Literature on diagnostic test interpretation has shown that access to clinical history can both enhance diagnostic accuracy and increase diagnostic error. Knowledge of clinical history has also been shown to enhance the more complex cognitive task of physical examination diagnosis, possibly by enabling early hypothesis generation. However, it is unclear whether clinicians adhere to these early hypotheses in the face of unexpected physical findings, thus resulting in diagnostic error. A sample of 180 internal medicine residents received a short clinical history and conducted a cardiac physical examination on a high-fidelity simulator. Residents were randomised to three groups based on the physical findings in the simulator. The concordant group received physical examination findings consistent with the diagnosis that was most probable based on the clinical history. Discordant groups received findings associated with plausible alternative diagnoses which either lacked expected findings (indistinct discordant) or contained unexpected findings (distinct discordant). Physical examination diagnostic accuracy and physical examination findings were analysed. Physical examination diagnostic accuracy varied significantly among groups: 75 ± 44%, 2 ± 13% and 31 ± 47% in the concordant, indistinct discordant and distinct discordant groups, respectively (F(2,177) = 53, p < 0.0001). Of the 115 residents who were diagnostically unsuccessful, 33% adhered to their original incorrect hypotheses. Residents verbalised an average of 12 findings (interquartile range: 10-14); 58 ± 17% were correct, and the percentage of correct findings was similar in all three groups (p = 0.44). Residents showed substantially decreased diagnostic accuracy when faced with discordant physical findings. The majority of trainees given discordant physical findings rejected their initial hypotheses, but were still diagnostically unsuccessful.
These results suggest that overcoming the bias induced by a misleading clinical history may involve two independent steps: rejection of the incorrect initial hypothesis, and selection of the correct diagnosis. Educational strategies focused solely on prompting clinicians to re-examine their hypotheses may be insufficient to reduce diagnostic error.
Integrated Data Analysis for Fusion: A Bayesian Tutorial for Fusion Diagnosticians
NASA Astrophysics Data System (ADS)
Dinklage, Andreas; Dreier, Heiko; Fischer, Rainer; Gori, Silvio; Preuss, Roland; Toussaint, Udo von
2008-03-01
Integrated Data Analysis (IDA) offers a unified way of combining information relevant to fusion experiments. Thereby, IDA meets with typical issues arising in fusion data analysis. In IDA, all information is consistently formulated as probability density functions quantifying uncertainties in the analysis within the Bayesian probability theory. For a single diagnostic, IDA allows the identification of faulty measurements and improvements in the setup. For a set of diagnostics, IDA gives joint error distributions allowing the comparison and integration of different diagnostics results. Validation of physics models can be performed by model comparison techniques. Typical data analysis applications benefit from IDA capabilities of nonlinear error propagation, the inclusion of systematic effects and the comparison of different physics models. Applications range from outlier detection, background discrimination, model assessment and design of diagnostics. In order to cope with next step fusion device requirements, appropriate techniques are explored for fast analysis applications.
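The core IDA move described above — expressing each diagnostic's result as a probability density and combining them into a joint distribution — can be illustrated with the simplest possible case. The sketch below assumes two independent diagnostics with Gaussian uncertainties and a flat prior, in which case Bayes' theorem reduces to inverse-variance weighting; the numbers are invented for illustration, and real IDA applications involve nonlinear forward models and non-Gaussian errors.

```python
# Minimal illustration of combining two diagnostics in a Bayesian way:
# with Gaussian likelihoods and a flat prior, the joint posterior is again
# Gaussian, with an inverse-variance-weighted mean. Values are hypothetical.

def combine_gaussian(m1, s1, m2, s2):
    """Posterior mean and standard deviation for one quantity measured by
    two independent diagnostics with Gaussian errors (flat prior)."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    std = (w1 + w2) ** -0.5
    return mean, std

# e.g. the same plasma parameter seen by two (hypothetical) diagnostics:
# the combined estimate is pulled toward the more precise one, and its
# uncertainty is smaller than either input's
mean, std = combine_gaussian(2.0, 0.4, 2.4, 0.2)
```

A large mismatch between the two likelihoods, relative to their widths, is exactly the kind of signal IDA exploits for outlier detection and identification of faulty measurements.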
Numerical Error Estimation with UQ
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Korn, Peter; Marotzke, Jochem
2014-05-01
Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. 
We will show that a sensible parameter can be chosen by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process, which is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References: [1] F. Rauser: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD thesis, IMPRS-ESM, Hamburg, 2010. [2] F. Rauser, J. Marotzke, P. Korn: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted.
Cosby, Karen S; Zipperer, Lorri; Balik, Barbara
2015-09-01
The patient safety literature is full of exhortations to approach medical error from a system perspective and seek multidisciplinary solutions from groups including clinicians, patients themselves, as well as experts outside the traditional medical domain. The 7th annual International Conference on Diagnostic Error in Medicine sought to attract a multispecialty audience, and attempted to capture some of the conversations by engaging participants in a World Café, a technique used to stimulate discussion and preserve insight gained during the conference. We present the ideas generated in this session, discuss them in the context of psychological safety, and demonstrate the application of this novel technique.
Vavilov, A Iu; Viter, V I
2007-01-01
Mathematical aspects of the measurement errors of modern thermometric models of postmortem cooling of the human body are considered. The main diagnostic body sites used for thermometry are analyzed with the aim of minimizing these errors. The authors propose practical recommendations for reducing errors in estimating the time since death (the postmortem interval).
Errors of logic and scholarship concerning dissociative identity disorder.
Ross, Colin A
2009-01-01
The author reviewed a two-part critique of dissociative identity disorder published in the Canadian Journal of Psychiatry. The two papers contain errors of logic and scholarship. Contrary to the conclusions in the critique, dissociative identity disorder has established diagnostic reliability and concurrent validity, the trauma histories of affected individuals can be corroborated, and the existing prospective treatment outcome literature demonstrates improvement in individuals receiving psychotherapy for the disorder. The available evidence supports the inclusion of dissociative identity disorder in future editions of the Diagnostic and Statistical Manual of Mental Disorders.
Types of diagnostic errors in neurological emergencies in the emergency department.
Dubosh, Nicole M; Edlow, Jonathan A; Lefton, Micah; Pope, Jennifer V
2015-02-01
Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.
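The per-error-type proportions with 95% confidence intervals reported above are straightforward to reproduce in principle. The abstract does not state which interval method was used; the sketch below uses the Wilson score interval, one common choice for small samples, with illustrative counts (13 of 29 cases) rather than the study's exact data.

```python
# Wilson score interval for a proportion -- a common choice of ~95% CI for
# small samples like the 29 error cases here. Counts are illustrative only.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Two-sided ~95% Wilson score interval for k successes out of n trials."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# e.g. 13 of 29 misdiagnoses attributed to one error type (hypothetical split)
lo, hi = wilson_ci(13, 29)
```

For n = 29 the interval spans roughly 28% to 62% — comparably wide to the CIs the study reports, which underscores why this was framed as a pilot.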
An audit of request forms submitted in a multidisciplinary diagnostic center in Lagos.
Oyedeji, Olufemi Abiola; Ogbenna, Abiola Ann; Iwuala, Sandra Omozehio
2015-01-01
Request forms are an important means of communication between physicians and diagnostic service providers. Pre-analytical errors account for over two-thirds of errors encountered in diagnostic service provision. The importance of adequate completion of request forms is usually underestimated by physicians, which may result in medical errors or delay in instituting appropriate treatment. The aim of this study was to audit the level of completion of request forms presented at a multidisciplinary diagnostic center. A review of all request forms for investigations, which included radiologic, laboratory and cardiac investigations, received between July and December 2011 was performed to assess their level of completeness. The data were entered into a spreadsheet and analyzed. Only 1.3% of the 7,841 request forms reviewed were fully completed. The patient's name, the referring physician's name and the patient's gender were the most completed items on the forms evaluated, with 99.0%, 99.0% and 90.3% completion respectively. The patient's age was provided in 68.0%, the request date in 88.2%, and clinical notes/diagnosis in 65.9% of the requests. The patient's full address was provided in only 5.6% of the requests evaluated. This study shows that investigation request forms are inadequately completed by physicians in our environment. Continuous medical education of physicians on the need for adequate completion of request forms is needed.
Awareness of Diagnostic Error among Japanese Residents: a Nationwide Study.
Nishizaki, Yuji; Shinozaki, Tomohiro; Kinoshita, Kensuke; Shimizu, Taro; Tokuda, Yasuharu
2018-04-01
Residents' understanding of diagnostic error may differ between countries. We sought to explore the relationship between diagnostic error knowledge and self-study, clinical knowledge, and experience. Our nationwide study involved postgraduate year 1 and 2 (PGY-1 and -2) Japanese residents. The Diagnostic Error Knowledge Assessment Test (D-KAT) and General Medicine In-Training Examination (GM-ITE) were administered at the end of the 2014 academic year. D-KAT scores were compared with the benchmark scores of US residents. Associations between D-KAT score and gender, PGY, emergency department (ED) rotations per month, mean number of inpatients handled at any given time, and mean daily minutes of self-study were also analyzed, both with and without adjusting for GM-ITE scores. Student's t test was used for comparisons, with linear mixed models and structural equation models (SEM) to explore associations with D-KAT or GM-ITE scores. The mean D-KAT score among Japanese PGY-2 residents was significantly lower than that of their US PGY-2 counterparts (6.2 vs. 8.3, p < 0.001). GM-ITE scores correlated with ED rotations (≥6 rotations: 2.14; 0.16-4.13; p = 0.03), inpatient caseloads (5-9 patients: 1.79; 0.82-2.76; p < 0.001), and average daily minutes of self-study (≥91 min: 2.05; 0.56-3.53; p = 0.01). SEM revealed that D-KAT scores were directly associated with GM-ITE scores (β = 0.37, 95% CI: 0.34-0.41) and indirectly associated with ED rotations (β = 0.06, 95% CI: 0.02-0.10), inpatient caseload (β = 0.04, 95% CI: 0.003-0.08), and average daily minutes of study (β = 0.13, 95% CI: 0.09-0.17). Knowledge regarding diagnostic error among Japanese residents was poor compared with that among US residents. D-KAT scores correlated strongly with GM-ITE scores, and the latter scores were positively associated with a greater number of ED rotations, larger caseload (though only up to 15 patients), and more time spent studying.
External Quality Assessment beyond the analytical phase: an Australian perspective.
Badrick, Tony; Gay, Stephanie; McCaughey, Euan J; Georgiou, Andrew
2017-02-15
External Quality Assessment (EQA) is the verification, on a recurring basis, that laboratory results conform to expectations for the quality required for patient care. It is now widely recognised that both the pre- and post-laboratory phase of testing, termed the diagnostic phases, are a significant source of laboratory errors. These errors have a direct impact on both the effectiveness of the laboratory and patient safety. Despite this, Australian laboratories tend to be focussed on very narrow concepts of EQA, primarily surrounding test accuracy, with little in the way of EQA programs for the diagnostic phases. There is a wide range of possibilities for the development of EQA for the diagnostic phases in Australia, such as the utilisation of scenarios and health informatics. Such programs can also be supported through advances in health information and communications technology, including electronic test ordering and clinical decision support systems. While the development of such programs will require consultation and support from the referring doctors, and their format will need careful construction to ensure that the data collected is de-identified and provides education as well as useful and informative data, we believe that there is high value in the development of such programs. Therefore, it is our opinion that all pathology laboratories should strive to be involved in an EQA program in the diagnostic phases to both monitor the diagnostic process and to identify, learn from and reduce errors and near misses in these phases in a timely fashion.
Flannery, Frank T; Parikh, Parul Divya; Oetgen, William J
2010-01-01
This study describes a large database of closed medical professional liability (MPL) claims involving family physicians in the United States. The purpose of this report is to provide information for practicing family physicians that will be useful in improving the quality of care, thereby reducing the incidence of patient injury and the consequent frequency of MPL claims. The Physician Insurers Association of America (PIAA) established a registry of closed MPL claims in 1985. This registry contains data describing 239,756 closed claims in the United States through 2008. The registry is maintained for educational programs that are designed to improve quality of care and reduce patient injury and MPL claims. We summarized this closed claims database. Of 239,756 closed claims, 27,556 (11.5%) involved family physicians. Of these 27,556 closed claims, 8797 (31.9%) resulted in a payment, and the average payment was $164,107. In the entire registry, 29.5% of closed claims were paid, and the average payment was $209,156. The most common allegation among family medicine closed claims was diagnostic error, and the most prevalent diagnosis was acute myocardial infarction, which represented 24.1% of closed claims with diagnostic errors. Diagnostic errors related to patients with breast cancer represented the next most common condition, accounting for 21.3% of closed claims with diagnostic errors. MPL issues are common and important to all practicing family physicians. Knowledge of the details of liability claims should assist practicing family physicians in improving quality of care, reducing patient injury, and reducing the incidence of MPL claims.
NASA Astrophysics Data System (ADS)
Shahriar, Md Rifat; Borghesani, Pietro; Randall, R. B.; Tan, Andy C. C.
2017-11-01
Demodulation is a necessary step in the field of diagnostics to reveal faults whose signatures appear as an amplitude and/or frequency modulation. The Hilbert transform has conventionally been used for the calculation of the analytic signal required in the demodulation process. However, the carrier and modulation frequencies must meet the conditions set by the Bedrosian identity for the Hilbert transform to be applicable for demodulation. This condition, which basically requires the carrier frequency to be sufficiently higher than the frequencies of the modulation harmonics, is usually satisfied in many traditional diagnostic applications (e.g. vibration analysis of gear and bearing faults) due to the order-of-magnitude ratio between the carrier and modulation frequencies. However, the diversification of diagnostic approaches and applications has produced cases (e.g. electrical signature analysis-based diagnostics) where the carrier frequency is in close proximity to the modulation frequency, thus challenging the applicability of the Bedrosian theorem. This work presents an analytic study to quantify the error introduced by Hilbert transform-based demodulation when the Bedrosian identity is not satisfied and proposes a mitigation strategy to combat the error. An experimental study is also carried out to verify the analytical results. The outcome of the error analysis sets a confidence limit on the estimated modulation (both shape and magnitude) achieved through Hilbert transform-based demodulation when the Bedrosian theorem is violated. The proposed mitigation strategy is found effective in combating the demodulation error arising in this scenario, thus extending the applicability of Hilbert transform-based demodulation.
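The Bedrosian condition at issue here can be demonstrated numerically. The sketch below is not the paper's analysis or mitigation strategy: it uses an invented AM test signal and a pure-Python O(N²) DFT-based analytic signal (in practice one would use `scipy.signal.hilbert`), and simply shows envelope recovery failing once the modulation frequency exceeds the carrier.

```python
# Demonstration of Hilbert-transform amplitude demodulation and its failure
# when the Bedrosian condition is violated (modulation frequency above the
# carrier). Pure-Python O(N^2) DFT kept for self-containment.
import cmath, math

def analytic_signal(x):
    """Analytic signal via the frequency domain: zero the negative
    frequencies, double the positive ones (even-length input)."""
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n)]
    H = ([X[0]] + [2 * X[k] for k in range(1, n // 2)] + [X[n // 2]]
         + [0] * (n // 2 - 1))
    return [sum(H[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def envelope_error(fc, fm, n=128):
    """Max deviation of |analytic signal| from the true AM envelope for
    x(t) = (1 + 0.5*cos(2*pi*fm*t/n)) * cos(2*pi*fc*t/n)."""
    env_true = [1 + 0.5 * math.cos(2 * math.pi * fm * t / n) for t in range(n)]
    x = [env_true[t] * math.cos(2 * math.pi * fc * t / n) for t in range(n)]
    env_est = [abs(z) for z in analytic_signal(x)]
    return max(abs(e, ) if False else abs(e - g) for e, g in zip(env_est, env_true))

err_ok = envelope_error(fc=40, fm=3)   # carrier >> modulation: Bedrosian holds
err_bad = envelope_error(fc=2, fm=3)   # modulation above carrier: it fails
```

`err_ok` sits at numerical-noise level, while `err_bad` is a substantial fraction of the unit envelope amplitude — the kind of demodulation error the paper quantifies analytically and then mitigates.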
Weaver, Sallie J; Newman-Toker, David E; Rosen, Michael A
2012-01-01
Missed, delayed, or wrong diagnoses can have a severe impact on patients, providers, and the entire health care system. One mechanism implicated in such diagnostic errors is the deterioration of cognitive diagnostic skills that are used rarely or not at all over a prolonged period of time. Existing evidence regarding maintenance of effective cognitive reasoning skills in the clinical education, organizational training, and human factors literatures suggests that continuing education plays a critical role in mitigating and managing diagnostic skill decay. Recent models also underscore the role of system-level factors (e.g., cognitive decision support tools, just-in-time training opportunities) in supporting the clinical reasoning process. The purpose of this manuscript is to offer a multidisciplinary review of cognitive models of clinical decision-making skills in order to provide a list of best practices for supporting continuous improvement and maintenance of cognitive diagnostic processes through continuing education.
Measures to Improve Diagnostic Safety in Clinical Practice.
Singh, Hardeep; Graber, Mark L; Hofer, Timothy P
2016-10-20
Timely and accurate diagnosis is foundational to good clinical practice and an essential first step to achieving optimal patient outcomes. However, a recent Institute of Medicine report concluded that most of us will experience at least one diagnostic error in our lifetime. The report argues for efforts to improve the reliability of the diagnostic process through better measurement of diagnostic performance. The diagnostic process is a dynamic team-based activity that involves uncertainty, plays out over time, and requires effective communication and collaboration among multiple clinicians, diagnostic services, and the patient. Thus, it poses special challenges for measurement. In this paper, we discuss how the need to develop measures to improve diagnostic performance could move forward at a time when the scientific foundation needed to inform measurement is still evolving. We highlight challenges and opportunities for developing potential measures of "diagnostic safety" related to clinical diagnostic errors and associated preventable diagnostic harm. In doing so, we propose a starter set of measurement concepts for initial consideration that seem reasonably related to diagnostic safety and call for these to be studied and further refined. This would enable safe diagnosis to become an organizational priority and facilitate quality improvement. Health-care systems should consider measurement and evaluation of diagnostic performance as essential to timely and accurate diagnosis and to the reduction of preventable diagnostic harm.
Doherty, Carolynne M; Forbes, Raeburn B
2014-01-01
Diagnostic Lumbar Puncture is one of the most commonly performed invasive tests in clinical medicine. Evaluation of an acute headache and investigation of inflammatory or infectious disease of the nervous system are the most common indications. Serious complications are rare, and correct technique will minimise diagnostic error and maximise patient comfort. We review the technique of diagnostic Lumbar Puncture including anatomy, needle selection, needle insertion, measurement of opening pressure, Cerebrospinal Fluid (CSF) specimen handling and after care. We also make some quality improvement suggestions for those designing services incorporating diagnostic Lumbar Puncture. PMID:25075138
Diagnostic Hypothesis Generation and Human Judgment
ERIC Educational Resources Information Center
Thomas, Rick P.; Dougherty, Michael R.; Sprenger, Amber M.; Harbison, J. Isaiah
2008-01-01
Diagnostic hypothesis-generation processes are ubiquitous in human reasoning. For example, clinicians generate disease hypotheses to explain symptoms and help guide treatment, auditors generate hypotheses for identifying sources of accounting errors, and laypeople generate hypotheses to explain patterns of information (i.e., data) in the…
Ingram, W Scott; Yang, Jinzhong; Wendt, Richard; Beadle, Beth M; Rao, Arvind; Wang, Xin A; Court, Laurence E
2017-08-01
To assess the influence of non-rigid anatomy and differences in patient positioning between CT acquisition and endoscopic examination on endoscopy-CT image registration in the head and neck. Radiotherapy planning CTs and 31-35 daily treatment-room CTs were acquired for nineteen patients. Diagnostic CTs were acquired for thirteen of the patients. The surfaces of the airways were segmented on all scans and triangular meshes were created to render virtual endoscopic images with a calibrated pinhole model of an endoscope. The virtual images were used to take projective measurements throughout the meshes, with reference measurements defined as those taken on the planning CTs and test measurements defined as those taken on the daily or diagnostic CTs. The influence of non-rigid anatomy was quantified by 3D distance errors between reference and test measurements on the daily CTs, and the influence of patient positioning was quantified by 3D distance errors between reference and test measurements on the diagnostic CTs. The daily CT measurements were also used to investigate the influences of camera-to-surface distance, surface angle, and the interval of time between scans. Average errors in the daily CTs were 0.36 ± 0.61 cm in the nasal cavity, 0.58 ± 0.83 cm in the naso- and oropharynx, and 0.47 ± 0.73 cm in the hypopharynx and larynx. Average errors in the diagnostic CTs in those regions were 0.52 ± 0.69 cm, 0.65 ± 0.84 cm, and 0.69 ± 0.90 cm, respectively. All CTs had errors heavily skewed towards 0, albeit with large outliers. Large camera-to-surface distances were found to increase the errors, but the angle at which the camera viewed the surface had no effect. The errors in the Day 1 and Day 15 CTs were found to be significantly smaller than those in the Day 30 CTs (P < 0.05). Inconsistencies of patient positioning have a larger influence than non-rigid anatomy on projective measurement errors. 
In general, these errors are largest when the camera is in the superior pharynx, where it sees large distances and a lot of muscle motion. The errors are larger when the interval of time between CT acquisitions is longer, which suggests that the interval of time between the CT acquisition and the endoscopic examination should be kept short. The median errors found in this study are comparable to acceptable levels of uncertainty in deformable CT registration. Large errors are possible even when image alignment is very good, indicating that projective measurements must be made carefully to avoid these outliers. © 2017 American Association of Physicists in Medicine.
Defining and Measuring Diagnostic Uncertainty in Medicine: A Systematic Review.
Bhise, Viraj; Rajan, Suja S; Sittig, Dean F; Morgan, Robert O; Chaudhary, Pooja; Singh, Hardeep
2018-01-01
Physicians routinely encounter diagnostic uncertainty in practice. Despite its impact on health care utilization, costs and error, measurement of diagnostic uncertainty is poorly understood. We conducted a systematic review to describe how diagnostic uncertainty is defined and measured in medical practice. We searched OVID Medline and PsycINFO databases from inception until May 2017 using a combination of keywords and Medical Subject Headings (MeSH). Additional search strategies included manual review of references identified in the primary search, use of a topic-specific database (AHRQ-PSNet) and expert input. We specifically focused on articles that (1) defined diagnostic uncertainty; (2) conceptualized diagnostic uncertainty in terms of its sources, complexity of its attributes or strategies for managing it; or (3) attempted to measure diagnostic uncertainty. We identified 123 articles for full review, none of which defined diagnostic uncertainty. Three attributes of diagnostic uncertainty were relevant for measurement: (1) it is a subjective perception experienced by the clinician; (2) it has the potential to impact diagnostic evaluation-for example, when inappropriately managed, it can lead to diagnostic delays; and (3) it is dynamic in nature, changing with time. Current methods for measuring diagnostic uncertainty in medical practice include: (1) asking clinicians about their perception of uncertainty (surveys and qualitative interviews), (2) evaluating the patient-clinician encounter (such as by reviews of medical records, transcripts of patient-clinician communication and observation), and (3) experimental techniques (patient vignette studies). The term "diagnostic uncertainty" lacks a clear definition, and there is no comprehensive framework for its measurement in medical practice. 
Based on review findings, we propose that diagnostic uncertainty be defined as a "subjective perception of an inability to provide an accurate explanation of the patient's health problem." Methodological advancements in measuring diagnostic uncertainty can improve our understanding of diagnostic decision-making and inform interventions to reduce diagnostic errors and overuse of health care resources.
2016-01-01
Background: It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution, and defining an illness by having a measurement outside an established healthy range will therefore lead to an inflated prevalence of that condition if there are measurement errors. Methods and results: A Monte Carlo simulation was conducted of anthropometric assessment of children with malnutrition. Random errors of increasing magnitude were imposed upon the populations; the standard deviation increased with each of the errors, exponentially so with the magnitude of the error. The potential magnitude of the resulting error in reported prevalence of malnutrition was compared with published international data and found to be of sufficient magnitude to make a number of surveys, and the numerous reports and analyses that used their data, unreliable. Conclusions: The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define "health", has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments; measurer selection, training and supervision; routine estimation of the likely magnitude of errors using standardization tests; use of the statistical likelihood of error to exclude data from analysis; and full reporting of these procedures, so that the reliability of survey reports can be judged. PMID:28030627
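The mechanism this abstract describes can be illustrated with a short Monte Carlo sketch (toy numbers of my own, not the paper's simulation): true z-scores are standard normal, malnutrition is defined as z below -2, and random measurement error widens the observed distribution, inflating the apparent prevalence below the cutoff.

```python
import random
import statistics

random.seed(0)
N = 200_000
CUTOFF = -2.0  # e.g. a weight-for-height z-score defining malnutrition

# True z-scores: standard normal, so the true prevalence is about 2.3%
true_z = [random.gauss(0, 1) for _ in range(N)]

def prevalence(zs):
    return sum(z < CUTOFF for z in zs) / len(zs)

print(f"no error:      prevalence = {prevalence(true_z):.2%}")

# Impose random measurement errors of increasing magnitude
for err_sd in (0.25, 0.5, 1.0):
    observed = [z + random.gauss(0, err_sd) for z in true_z]
    print(f"error sd {err_sd}: spread = {statistics.stdev(observed):.2f}, "
          f"prevalence = {prevalence(observed):.2%}")
```

The reported prevalence grows with the error magnitude even though the underlying population is unchanged, and averaging over more subjects does not remove the inflation, which matches the abstract's point about sample size.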
Figure and ground in physician misdiagnosis: metacognition and diagnostic norms.
Hamm, Robert M
2014-01-01
Metacognitive awareness, or self-reflection informed by the "heuristics and biases" account of how experts make cognitive errors, has been offered as a partial solution to diagnostic error in medicine. I argue that this approach is neither as easy nor as effective as one might hope. We should also promote mastery of the basic principles of diagnosis in medical school and continuing medical education, and through routine reflection and review. While it may seem difficult to attend to both levels simultaneously, there is more to be gained from attending to both than from focusing on only one.
Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.
Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A
2013-11-01
We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
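The construction described here is essentially a calibration check: under the correct posterior, the posterior CDF evaluated at the true signal is uniformly distributed on [0, 1], and numerical or approximation errors show up as characteristic deviations from uniformity. A minimal self-contained sketch with a conjugate Gaussian toy model (my own example, not taken from the paper):

```python
import random
from math import erf, sqrt

def phi(x):  # standard normal CDF
    return 0.5 * (1 + erf(x / sqrt(2)))

random.seed(1)
n_runs = 20_000

def calibration_quantities(post_var):
    """Posterior CDF evaluated at the true signal, over many simulated runs.

    Prior s ~ N(0,1); data d = s + N(0,1) noise; the correct posterior is
    N(d/2, 1/2). If post_var != 1/2, the assumed posterior is wrong and the
    quantities are no longer uniformly distributed on [0, 1].
    """
    us = []
    for _ in range(n_runs):
        s = random.gauss(0, 1)       # true signal drawn from the prior
        d = s + random.gauss(0, 1)   # observed data
        us.append(phi((s - d / 2) / sqrt(post_var)))
    return us

for label, v in [("correct variance", 0.5), ("overestimated variance", 2.0)]:
    us = calibration_quantities(v)
    var = sum((u - 0.5) ** 2 for u in us) / len(us)
    print(f"{label}: variance of u = {var:.3f} (uniform gives 1/12 = 0.083)")
```

An overconfident or underconfident posterior changes the spread of the calibration quantities, which is how the method discriminates incorrect variance from, say, a shifted maximum (which would skew their mean instead).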
Irregular analytical errors in diagnostic testing - a novel concept.
Vogeser, Michael; Seger, Christoph
2018-02-23
In laboratory medicine, routine periodic analyses for internal and external quality control measurements interpreted by statistical methods are mandatory for batch clearance. Data analysis of these process-oriented measurements allows for insight into random analytical variation and systematic calibration bias over time. However, in such a setting, any individual sample is not under individual quality control. The quality control measurements act only at the batch level. Many effects and interferences associated with an individual diagnostic sample can compromise any analyte, whether the derived data are quantitative or qualitative. A quality-control-sample-based approach to quality assurance is clearly not sensitive to such errors. To address the potential causes and nature of such analytical interference in individual samples more systematically, we suggest the introduction of a new term called the irregular (individual) analytical error. Practically, this term can be applied in any analytical assay that is traceable to a reference measurement system. For an individual sample, an irregular analytical error is defined as an inaccuracy (which is the deviation from a reference measurement procedure result) of a test result that is so large that it cannot be explained by measurement uncertainty of the utilized routine assay operating within the accepted limitations of the associated process quality control measurements. The deviation can be defined as the linear combination of the process measurement uncertainty and the method bias for the reference measurement system. Such errors should be coined irregular analytical errors of the individual sample. The measurement result is compromised either by an irregular effect associated with the individual composition (matrix) of the sample or by a processing error associated with that single sample in the analytical process.
Currently, the availability of reference measurement procedures is still highly limited, but LC-isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.
Finding Useful Questions: On Bayesian Diagnosticity, Probability, Impact, and Information Gain
ERIC Educational Resources Information Center
Nelson, Jonathan D.
2005-01-01
Several norms for how people should assess a question's usefulness have been proposed, notably Bayesian diagnosticity, information gain (mutual information), Kullback-Leibler distance, probability gain (error minimization), and impact (absolute change). Several probabilistic models of previous experiments on categorization, covariation assessment,…
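The competing norms named in this abstract can be made concrete with a two-hypothesis toy problem (the numbers below are illustrative, not from the article):

```python
from math import log2

# Two hypotheses and a binary question with answers a / not-a.
p_h = {"h1": 0.7, "h2": 0.3}
p_a_given_h = {"h1": 0.9, "h2": 0.3}  # P(answer = a | hypothesis)

p_a = sum(p_h[h] * p_a_given_h[h] for h in p_h)

def posterior(answer_a):
    like = p_a_given_h if answer_a else {h: 1 - p for h, p in p_a_given_h.items()}
    z = sum(p_h[h] * like[h] for h in p_h)
    return {h: p_h[h] * like[h] / z for h in p_h}

def entropy(dist):
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Bayesian diagnosticity of answer a: the likelihood ratio P(a|h1) / P(a|h2)
diagnosticity = p_a_given_h["h1"] / p_a_given_h["h2"]

# Expected information gain: prior entropy minus expected posterior entropy
exp_post_H = p_a * entropy(posterior(True)) + (1 - p_a) * entropy(posterior(False))
info_gain = entropy(p_h) - exp_post_H

# Probability gain: expected increase in the probability of guessing correctly
prior_best = max(p_h.values())
prob_gain = (p_a * max(posterior(True).values())
             + (1 - p_a) * max(posterior(False).values())) - prior_best

# Impact: expected absolute change in belief about h1
impact = (p_a * abs(posterior(True)["h1"] - p_h["h1"])
          + (1 - p_a) * abs(posterior(False)["h1"] - p_h["h1"]))

print(diagnosticity, info_gain, prob_gain, impact)
```

The norms can rank the same question differently, which is exactly why the choice of norm matters when judging which questions are "useful".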
Kluge, Annette; Grauel, Britta; Burkolter, Dina
2013-03-01
Two studies are presented in which the design of a procedural aid and the impact of an additional decision aid for process control were assessed. In Study 1, a procedural aid was developed that avoids imposing unnecessary extraneous cognitive load on novices when controlling a complex technical system. This newly designed procedural aid positively affected germane load, attention, satisfaction, motivation, knowledge acquisition and diagnostic speed for novel faults. In Study 2, the effect of a decision aid for use before the procedural aid was investigated, which was developed based on an analysis of diagnostic errors committed in Study 1. Results showed that novices were able to diagnose both novel faults and practised faults, and were even faster at diagnosing novel faults. This research contributes to the question of how to optimally support novices in dealing with technical faults in process control. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Planetary Transmission Diagnostics
NASA Technical Reports Server (NTRS)
Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.
2004-01-01
This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. 
However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
Diagnostic reasoning: where we've been, where we're going.
Monteiro, Sandra M; Norman, Geoffrey
2013-01-01
Recently, clinical diagnostic reasoning has been characterized by "dual processing" models, which postulate a fast, unconscious (System 1) component and a slow, logical, analytical (System 2) component. However, there are a number of variants of this basic model, which may lead to conflicting claims. This paper critically reviews current theories and evidence about the nature of clinical diagnostic reasoning. We begin by briefly discussing the history of research in clinical reasoning. We then focus more specifically on the evidence to support dual-processing models. We conclude by identifying knowledge gaps about clinical reasoning and provide suggestions for future research. In contrast to work on analytical and nonanalytical knowledge as a basis for reasoning, these theories focus on the thinking process, not the nature of the knowledge retrieved. Ironically, this appears to be a revival of an outdated concept. Rather than defining diagnostic performance by problem-solving skills, it is now being defined by processing strategy. The version of dual processing that has received most attention in the literature in medical diagnosis might be labeled a "default/interventionist" model,(17) which suggests that a default system of cognitive processes (System 1) is responsible for cognitive biases that lead to diagnostic errors and that System 2 intervenes to correct these errors. Consequently, from this model, the best strategy for reducing errors is to make students aware of the biases and to encourage them to rely more on System 2. However, an accumulation of evidence suggests that (a) strategies directed at increasing analytical (System 2) processing, by slowing down, reducing distractions, paying conscious attention, and (b) strategies directed at making students aware of the effect of cognitive biases, have no impact on error rates. 
Conversely, strategies based on increasing application of relevant knowledge appear to have some success and are consistent with basic research on concept formation.
Diagnostic Testing in Mathematics: An Extension of the PIAT?
ERIC Educational Resources Information Center
Algozzine, Bob; McGraw, Karen
1980-01-01
The article addresses the usefulness of the Peabody Individual Achievement Test (PIAT) in assessing various levels of arithmetic performance. The mathematics subtest of the PIAT is considered in terms of purpose; mathematical abilities subsections (foundations, basic facts, applications); diagnostic testing (the error analysis matrix); and poor…
Validity Arguments for Diagnostic Assessment Using Automated Writing Evaluation
ERIC Educational Resources Information Center
Chapelle, Carol A.; Cotos, Elena; Lee, Jooyoung
2015-01-01
Two examples demonstrate an argument-based approach to validation of diagnostic assessment using automated writing evaluation (AWE). "Criterion"® was developed by Educational Testing Service to analyze students' papers grammatically, providing sentence-level error feedback. An interpretive argument was developed for its use as part of…
Rekaya, Romdhane; Smith, Shannon; Hay, El Hamidi; Farhat, Nourhene; Aggrey, Samuel E
2016-01-01
Errors in the binary status of some response traits are frequent in human, animal, and plant applications. These error rates tend to differ between cases and controls because diagnostic and screening tests have different sensitivity and specificity. This increases the inaccuracies of classifying individuals into correct groups, giving rise to both false-positive and false-negative cases. The analysis of these noisy binary responses due to misclassification will undoubtedly reduce the statistical power of genome-wide association studies (GWAS). A threshold model that accommodates varying diagnostic errors between cases and controls was investigated. A simulation study was carried out where several binary data sets (case-control) were generated with varying effects for the most influential single nucleotide polymorphisms (SNPs) and different diagnostic error rates for cases and controls. Each simulated data set consisted of 2000 individuals. Ignoring misclassification resulted in biased estimates of true influential SNP effects and inflated estimates for true noninfluential markers. A substantial reduction in bias and increase in accuracy ranging from 12% to 32% was observed when the misclassification procedure was invoked. In fact, the majority of influential SNPs that were not identified using the noisy data were captured using the proposed method. Additionally, truly misclassified binary records were identified with high probability using the proposed method. The superiority of the proposed method was maintained across different simulation parameters (misclassification rates and odds ratios), attesting to its robustness.
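The attenuation that motivates this work can be reproduced in miniature: simulate a binary trait influenced by one SNP, flip labels at different rates for cases and controls, and watch the naive odds-ratio estimate shrink toward the null. This is a sketch with made-up parameters, not the authors' threshold model:

```python
import random
from math import exp, log

random.seed(2)
n = 20_000
p_allele = 0.3
true_beta = log(2.0)  # true odds ratio of 2 for carrying the risk allele

def simulate(fp_rate, fn_rate):
    """2x2 table of (observed status, allele carrier) with label noise."""
    table = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for _ in range(n):
        g = 1 if random.random() < p_allele else 0
        logit = -1.0 + true_beta * g
        y = 1 if random.random() < 1 / (1 + exp(-logit)) else 0
        # differential misclassification: cases false-negative at fn_rate,
        # controls false-positive at fp_rate
        if y == 1 and random.random() < fn_rate:
            y = 0
        elif y == 0 and random.random() < fp_rate:
            y = 1
        table[(y, g)] += 1
    return table

def odds_ratio(t):
    return (t[(1, 1)] * t[(0, 0)]) / (t[(1, 0)] * t[(0, 1)])

print("clean labels OR ~", round(odds_ratio(simulate(0.0, 0.0)), 2))
print("noisy labels OR ~", round(odds_ratio(simulate(0.10, 0.15)), 2))
```

With clean labels the estimated odds ratio sits near the true value of 2; with differential label noise it is biased toward 1, which is the loss of power and the bias in SNP effect estimates that the abstract describes.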
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
Assessing Diagnostic Tests I: You Can't Be Too Sensitive.
Jupiter, Daniel C
2015-01-01
Clinicians and patients are always interested in less invasive, cheaper, and faster diagnostic tests. When introducing such a test, physicians must ensure that it is reliable in its diagnoses and does not commit errors. In this article, I discuss several ways that new tests are compared against gold standard diagnostics. Copyright © 2015 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
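The basic comparison against a gold standard that this article discusses reduces to counting the four cells of a confusion matrix. A minimal sketch with hypothetical results on ten patients:

```python
def evaluate_against_gold(new_test, gold):
    """Compare a new diagnostic test against a gold standard.

    Both arguments are lists of 0/1 results on the same patients, with the
    gold standard taken as the truth.
    """
    tp = sum(n == 1 and g == 1 for n, g in zip(new_test, gold))
    fn = sum(n == 0 and g == 1 for n, g in zip(new_test, gold))
    tn = sum(n == 0 and g == 0 for n, g in zip(new_test, gold))
    fp = sum(n == 1 and g == 0 for n, g in zip(new_test, gold))
    return {
        "sensitivity": tp / (tp + fn),  # diseased patients correctly detected
        "specificity": tn / (tn + fp),  # healthy patients correctly cleared
    }

# Hypothetical results on ten patients (not data from the article)
gold     = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
new_test = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(evaluate_against_gold(new_test, gold))
# sensitivity 0.75 (3 of 4 diseased), specificity ~0.83 (5 of 6 healthy)
```

A test that "cannot be too sensitive" still trades off against specificity: the one false positive above is the cost of calling borderline results positive.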
Kaplan, Daniel M
2010-10-01
The author argues that the well-formulated problem list is essential for both organizing and evaluating diagnostic thinking. He considers evidence of deficiencies in problem lists in the medical record. He observes a trend among medical trainees toward organizing notes in the medical record according to lists of organ systems or medical subspecialties and hypothesizes that system-based documentation may undermine the art of problem formulation and diagnostic synthesis. Citing research linking more sophisticated problem representation with diagnostic success, he suggests that documentation style and clinical reasoning are closely connected and that organ-based documentation may predispose trainees to several varieties of cognitive diagnostic error and deficient synthesis. These include framing error, premature or absent closure, failure to integrate related findings, and failure to recognize the level of diagnostic resolution attained for a given problem. He acknowledges the pitfalls of higher-order diagnostic resolution, including the application of labels unsupported by firm evidence, while maintaining that diagnostic resolution as far as evidence permits is essential to both rational care of patients and rigorous education of learners. He proposes further research, including comparison of diagnostic efficiency between organ- and problem-oriented thinkers. He hypothesizes that the subspecialty-based structure of academic medical services helps perpetuate organ-system-based thinking, and calls on clinical educators to renew their emphasis on the formulation and documentation of complete and precise problem lists and progressively refined diagnoses by trainees.
Errors of Logic and Scholarship Concerning Dissociative Identity Disorder
ERIC Educational Resources Information Center
Ross, Colin A.
2009-01-01
The author reviewed a two-part critique of dissociative identity disorder published in the "Canadian Journal of Psychiatry". The two papers contain errors of logic and scholarship. Contrary to the conclusions in the critique, dissociative identity disorder has established diagnostic reliability and concurrent validity, the trauma histories of…
Proposed Interventions to Decrease the Frequency of Missed Test Results
ERIC Educational Resources Information Center
Wahls, Terry L.; Cram, Peter
2009-01-01
Numerous studies have identified that delays in diagnosis related to the mishandling of abnormal test results are an important contributor to diagnostic errors. Factors contributing to missed results included organizational factors, provider factors and patient-related factors. At the diagnostic error continuing medical education conference…
DIAGNOSTIC STUDY ON FINE PARTICULATE MATTER PREDICTIONS OF CMAQ IN THE SOUTHEASTERN U.S.
In this study, the authors use the process analysis tool embedded in CMAQ to examine major processes that govern the fate of key pollutants, identify the most influential processes that contribute to model errors, and guide the diagnostic and sensitivity studies aimed at improvin...
Intelligent Diagnostic Assistant for Complicated Skin Diseases through C5's Algorithm.
Jeddi, Fatemeh Rangraz; Arabfard, Masoud; Kermany, Zahra Arab
2017-09-01
An intelligent diagnostic assistant can be used for the complicated diagnosis of skin diseases, which are among the most common causes of disability. The aim of this study was to design and implement a computerized intelligent diagnostic assistant for complicated skin diseases through C5's algorithm. An applied-developmental study was done in 2015. The knowledge base was developed from interviews with dermatologists through questionnaires and checklists. Knowledge representation was obtained from the training data in the database using Microsoft Office Excel. Clementine software and C5's algorithm were applied to draw the decision tree. Analysis of test accuracy was performed based on rules extracted using inference chains. The rules extracted from the decision tree were entered into the CLIPS programming environment, and the intelligent diagnostic assistant was then designed. The rules were defined using the forward-chaining inference technique and were entered into the CLIPS programming environment as RULEs. The accuracy and error rates obtained in the training phase from the decision tree were 99.56% and 0.44%, respectively. The accuracy of the decision tree was 98% and the error 2% in the test phase. The intelligent diagnostic assistant can be used as a reliable system with high accuracy, sensitivity, specificity, and agreement.
An intelligent advisory system for pre-launch processing
NASA Technical Reports Server (NTRS)
Engrand, Peter A.; Mitchell, Tami
1991-01-01
The shuttle system of interest in this paper is the shuttle's data processing system (DPS). The DPS is composed of the following: (1) general purpose computers (GPC); (2) a multifunction CRT display system (MCDS); (3) mass memory units (MMU); and (4) a multiplexer/demultiplexer (MDM) and related software. To ensure correct functioning, some level of automatic error detection has been incorporated into all shuttle systems; for the DPS, error detection equipment has been incorporated into all of its subsystems. The automated diagnostic system, an MCDS diagnostic tool that aids in more efficient processing of the DPS, is described.
Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.
2010-05-30
Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults, and optimizing building control systems. However, in spite of good progress in developing tools for determining HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first principle modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.
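The core of such a Bayesian updating approach is repeatedly revising a fault probability as residuals (measured minus model-predicted values) arrive. A minimal sketch with invented numbers, not the authors' building models:

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * sqrt(2 * pi))

def update_fault_belief(prior_fault, residual, model_sd):
    """One Bayesian update step on a fault hypothesis.

    residual: measured minus model-predicted value (say, supply-air temp in C).
    Under "no fault" the residual is ~N(0, model_sd), where model_sd lumps
    together model imperfection and sensor error; under "fault" we assume a
    shifted residual ~N(3, model_sd). All numbers are illustrative.
    """
    like_fault = normal_pdf(residual, 3.0, model_sd)
    like_ok = normal_pdf(residual, 0.0, model_sd)
    num = prior_fault * like_fault
    return num / (num + (1 - prior_fault) * like_ok)

belief = 0.05  # prior probability of, say, a stuck damper
for r in [2.1, 2.8, 2.4]:  # residuals from three successive readings
    belief = update_fault_belief(belief, r, model_sd=1.5)
print(round(belief, 2))  # ≈ 0.69
```

Because the no-fault likelihood explicitly includes model and measurement uncertainty, a single noisy residual moves the belief only modestly; it takes consistent evidence to flag a fault, which is the robustness to imperfect models and variable data that the abstract emphasizes.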
Patient safety priorities in mental healthcare in Switzerland: a modified Delphi study.
Mascherek, Anna C; Schwappach, David L B
2016-08-05
Identifying patient safety priorities in mental healthcare is an emerging issue. A variety of aspects of patient safety in medical care apply for patient safety in mental care as well. However, specific aspects may be different as a consequence of special characteristics of patients, setting and treatment. The aim of the present study was to combine knowledge from the field and research and bundle existing initiatives and projects to define patient safety priorities in mental healthcare in Switzerland. The present study draws on national expert panels, namely, round-table discussion and modified Delphi consensus method. As preparation for the modified Delphi questionnaire, two round-table discussions and one semistructured questionnaire were conducted. Preparative work was conducted between May 2015 and October 2015. The modified Delphi was conducted to gauge experts' opinion on priorities in patient safety in mental healthcare in Switzerland. In two independent rating rounds, experts made private ratings. The modified Delphi was conducted in winter 2015. Nine topics were defined along the treatment pathway: diagnostic errors, non-drug treatment errors, medication errors, errors related to coercive measures, errors related to aggression management against self and others, errors in treatment of suicidal patients, communication errors, errors at interfaces of care and structural errors. Patient safety is considered as an important topic of quality in mental healthcare among experts, but it has been seriously neglected up until now. Activities in research and in practice are needed. Structural errors and diagnostics were given highest priority. From the topics identified, some are overlapping with important aspects of patient safety in medical care; however, some core aspects are unique. Published by the BMJ Publishing Group Limited. 
Rhythmic chaos: irregularities of computer ECG diagnosis.
Wang, Yi-Ting Laureen; Seow, Swee-Chong; Singh, Devinder; Poh, Kian-Keong; Chai, Ping
2017-09-01
Diagnostic errors can occur when physicians rely solely on computer electrocardiogram interpretation. Cardiologists often receive referrals for computer misdiagnoses of atrial fibrillation. Patients may have been inappropriately anticoagulated for pseudo atrial fibrillation. Anticoagulation carries significant risks, and such errors may carry a high cost. Have we become overreliant on machines and technology? In this article, we illustrate three such cases and briefly discuss how we can reduce these errors. Copyright: © Singapore Medical Association.
NASA Astrophysics Data System (ADS)
Fedonin, O. N.; Petreshin, D. I.; Ageenko, A. V.
2018-03-01
In the article, the issue of increasing the accuracy of a CNC lathe by compensating for the static and dynamic errors of the machine is investigated. An algorithm and a diagnostic system for a CNC machine tool are considered, which allow the machine's errors to be determined so that they can be compensated. The results of experimental studies on diagnosing and improving the accuracy of a CNC lathe are presented.
ERIC Educational Resources Information Center
Haley, Katarina L.; Jacks, Adam; Cunningham, Kevin T.
2013-01-01
Purpose: This study was conducted to evaluate the clinical utility of error variability for differentiating between apraxia of speech (AOS) and aphasia with phonemic paraphasia. Method: Participants were 32 individuals with aphasia after left cerebral injury. Diagnostic groups were formed on the basis of operationalized measures of recognized…
ERIC Educational Resources Information Center
Diamond, James J.; McCormick, Janet
1986-01-01
Using item responses from an in-training examination in diagnostic radiology, the application of a strength of association statistic to the general problem of item analysis is illustrated. Criteria for item selection, general issues of reliability, and error of measurement are discussed. (Author/LMO)
A five-year experience with throat cultures.
Shank, J C; Powell, T A
1984-06-01
This study addresses the usefulness of the throat culture in a family practice residency setting and explores the following questions: (1) Do faculty physicians clinically identify streptococcal pharyngitis better than residents? (2) With time, will residents and faculty physicians improve in their diagnostic accuracy? (3) Should the throat culture be used always, selectively, or never? A total of 3,982 throat cultures were obtained over a five-year study period with 16 percent positive for beta-hemolytic streptococci. The results were compared with the physician's clinical diagnosis of either "nonstreptococcal" (category A) or "streptococcal" (category B). Within category A, 363 of 3,023 patients had positive cultures (12 percent clinical diagnostic error rate). Within category B, 665 of 959 patients had negative cultures (69 percent clinical diagnostic error rate). Faculty were significantly better than residents in diagnosing streptococcal pharyngitis, but not in diagnosing nonstreptococcal sore throats. Neither faculty nor residents improved their diagnostic accuracy over time. Regarding age-specific recommendations, the findings support utilizing a throat culture in all children aged 2 to 15 years with sore throat, but in adults only when the physician suspects streptococcal pharyngitis.
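The clinical diagnostic error rates quoted above follow directly from the reported counts; a minimal sketch (Python, with the abstract's figures hard-coded, not the study's own code) reproduces the arithmetic:

```python
# Error rates from the throat-culture study (counts taken from the abstract).
# Category A: clinical diagnosis "nonstreptococcal"; an error is a positive culture.
# Category B: clinical diagnosis "streptococcal"; an error is a negative culture.

def error_rate(errors: int, total: int) -> float:
    """Clinical diagnostic error rate as a percentage."""
    return 100.0 * errors / total

category_a = error_rate(363, 3023)  # positive cultures despite "nonstrep" diagnosis
category_b = error_rate(665, 959)   # negative cultures despite "strep" diagnosis

print(f"Category A error rate: {category_a:.0f}%")  # ~12%
print(f"Category B error rate: {category_b:.0f}%")  # ~69%
```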
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Cognitive balanced model: a conceptual scheme of diagnostic decision making.
Lucchiari, Claudio; Pravettoni, Gabriella
2012-02-01
Diagnostic reasoning is a critical aspect of clinical performance, having a high impact on quality and safety of care. Although diagnosis is fundamental in medicine, we still have a poor understanding of the factors that determine its course. According to traditional understanding, all information used in diagnostic reasoning is objective and logically driven. However, these conditions are not always met. Although we would be less likely to make an inaccurate diagnosis when following rational decision making, as described by normative models, the real diagnostic process works in a different way. Recent work has described the major cognitive biases in medicine as well as a number of strategies for reducing them, collectively called debiasing techniques. However, advances have encountered obstacles in achieving implementation into clinical practice. While traditional understanding of clinical reasoning has failed to consider contextual factors, most debiasing techniques seem to fail in raising sound and safer medical praxis. Technological solutions, being data driven, are fundamental in increasing care safety, but they need to consider human factors. Thus, balanced models, cognitive driven and technology based, are needed in day-to-day applications to actually improve the diagnostic process. The purpose of this article, then, is to provide insight into cognitive influences that have resulted in wrong, delayed or missed diagnosis. Using a cognitive approach, we describe the basis of medical error, with particular emphasis on diagnostic error. We then propose a conceptual scheme of the diagnostic process by the use of fuzzy cognitive maps. © 2011 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Güttler, I.
2012-04-01
Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave energy flux (SNS) are detected in simulations of RegCM at 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0 and the other variables to the GEWEX-SRB 3.0 dataset. Most of the systematic errors found in SNL and SNS are consistent with errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. Errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of albedo errors is primarily confined to north Africa, where, e.g., the underestimation of albedo in JJA is consistent with the associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of PBL scheme and to various parameters within the PBL schemes is examined from an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs over Europe with mixed success when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over the whole domain. Nevertheless, improvements in T2m can be found in, e.g., north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of the mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work on customizing the PBL scheme are suggested.
Uncharted territory: measuring costs of diagnostic errors outside the medical record.
Schwartz, Alan; Weiner, Saul J; Weaver, Frances; Yudkowsky, Rachel; Sharma, Gunjan; Binns-Calvey, Amy; Preyss, Ben; Jordan, Neil
2012-11-01
In a past study using unannounced standardised patients (USPs), substantial rates of diagnostic and treatment errors were documented among internists. Because the authors know the correct disposition of these encounters and obtained the physicians' notes, they can identify necessary treatment that was not provided and unnecessary treatment. They can also discern which errors can be identified exclusively from a review of the medical records. To estimate the avoidable direct costs incurred by physicians making errors in our previous study. In the study, USPs visited 111 internal medicine attending physicians. They presented variants of four previously validated cases that jointly manipulate the presence or absence of contextual and biomedical factors that could lead to errors in management if overlooked. For example, in a patient with worsening asthma symptoms, a complicating biomedical factor was the presence of reflux disease and a complicating contextual factor was inability to afford the currently prescribed inhaler. Costs of missed or unnecessary services were computed using Medicare cost-based reimbursement data. Fourteen practice locations, including two academic clinics, two community-based primary care networks with multiple sites, a core safety net provider, and three Veteran Administration government facilities. Contribution of errors to costs of care. Overall, errors in care resulted in predicted costs of approximately $174,000 across 399 visits, of which only $8745 was discernible from a review of the medical records alone (without knowledge of the correct diagnoses). The median cost of error per visit with an incorrect care plan differed by case and by presentation variant within case. Chart reviews alone underestimate costs of care because they typically reflect appropriate treatment decisions conditional on (potentially erroneous) diagnoses. Important information about patient context is often entirely missing from medical records. 
Experimental methods, including the use of USPs, reveal the substantial costs of these errors.
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Presentation of nursing diagnosis content in fundamentals of nursing textbooks.
Mahon, S M; Spies, M A; Aukamp, V; Barrett, J T; Figgins, M J; Meyer, G A; Young, V K
1997-01-01
The technique and rationale for the use of nursing diagnosis generally are introduced early in the undergraduate curriculum. The three purposes of this descriptive study were to describe the general characteristics and presentation of content on nursing diagnosis in fundamentals of nursing textbooks; describe how the content from the theoretical chapter(s) in nursing diagnosis is carried through in the clinical chapters; and describe how content on diagnostic errors is presented. Although most of the textbooks presented content on nursing diagnosis in a similar fashion, the clinical chapters of the books did not follow the same pattern. Content on diagnostic errors was inconsistent. Educators may find this an effective methodology for reviewing textbooks.
Monteiro, Sandra; Norman, Geoff; Sherbino, Jonathan
2018-06-01
There is general consensus that clinical reasoning involves 2 stages: a rapid stage where 1 or more diagnostic hypotheses are advanced and a slower stage where these hypotheses are tested or confirmed. The rapid hypothesis generation stage is considered inaccessible for analysis or observation. Consequently, recent research on clinical reasoning has focused specifically on improving the accuracy of the slower, hypothesis confirmation stage. Three perspectives have developed in this line of research, and each proposes different error reduction strategies for clinical reasoning. This paper considers these 3 perspectives and examines the underlying assumptions. Additionally, this paper reviews the evidence, or lack thereof, behind each class of error reduction strategies. The first perspective takes an epidemiological stance, appealing to the benefits of incorporating population data and evidence-based medicine in everyday clinical reasoning. The second builds on the heuristic and bias research programme, appealing to a special class of dual process reasoning models that theorizes a rapid, error-prone cognitive process for problem solving alongside a slower, more logical cognitive process capable of correcting those errors. Finally, the third perspective borrows from an exemplar model of categorization that explicitly relates clinical knowledge and experience to diagnostic accuracy. © 2018 John Wiley & Sons, Ltd.
Making a structured psychiatric diagnostic interview faithful to the nomenclature.
Robins, Lee N; Cottler, Linda B
2004-10-15
Psychiatric diagnostic interviews to be used in epidemiologic studies by lay interviewers have, since the 1970s, attempted to operationalize existing psychiatric nomenclatures. How to maximize the chances that they do so successfully has not previously been spelled out. In this article, the authors discuss strategies for each of the seven steps involved in writing, updating, or modifying a diagnostic interview and its supporting materials: 1) writing questions that match the nomenclature's criteria, 2) checking that respondents will be willing and able to answer the questions, 3) choosing a format acceptable to interviewers that maximizes accurate answering and recording of answers, 4) constructing a data entry and cleaning program that highlights errors to be corrected, 5) creating a diagnostic scoring program that matches the nomenclature's algorithms, 6) developing an interviewer training program that maximizes reliability, and 7) computerizing the interview. For each step, the authors discuss how to identify errors, correct them, and validate the revisions. Although operationalization will never be perfect because of ambiguities in the nomenclature, specifying methods for minimizing divergence from the nomenclature is timely as users modify existing interviews and look forward to updating interviews based on the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, and the International Classification of Diseases, Eleventh Revision.
Measuring quality in anatomic pathology.
Raab, Stephen S; Grzybicki, Dana Marie
2008-06-01
This article focuses mainly on diagnostic accuracy in measuring quality in anatomic pathology, noting that measuring any quality metric is complex and demanding. The authors discuss standardization and its variability within and across areas of care delivery and efforts involving defining and measuring error to achieve pathology quality and patient safety. They propose that data linking error to patient outcome are critical for developing quality improvement initiatives targeting errors that cause patient harm in addition to using methods of root cause analysis, beyond those traditionally used in cytologic-histologic correlation, to assist in the development of error reduction and quality improvement plans.
ERIC Educational Resources Information Center
Tsai, Yea-Ru; Ouyang, Chen-Sen; Chang, Yukon
2016-01-01
The purpose of this study is to propose a diagnostic approach to identify engineering students' English reading comprehension errors. Student data were collected during the process of reading texts of English for science and technology on a web-based cumulative sentence analysis system. For the analysis, the association-rule, data mining technique…
Uncertainties in climate data sets
NASA Technical Reports Server (NTRS)
Mcguirk, James P.
1992-01-01
Climate diagnostics are constructed from either analyzed fields or from observational data sets. Those that have been commonly used are normally considered ground truth. However, in most of these collections, errors and uncertainties exist which are generally ignored due to the consistency of usage over time. Examples of uncertainties and errors are described in NMC and ECMWF analyses and in satellite observational sets: OLR, TOVS, and SMMR. It is suggested that these errors can be large, systematic, and not negligible in climate analysis.
VLSI (Very Large Scale Integrated Circuits) Design with the MacPitts Silicon Compiler.
1985-09-01
the background. If the algorithm is not fully debugged, then issue instead macpitts basename herald so MacPitts diagnostics and Liszt diagnostics both...command interpreter. Upon compilation, however, the following Franz LISP compiler (Liszt) diagnostic results: Error: Non-number to minus nil where the first...language used in the MacPitts source code. The more instructive solution is to write the Franz LISP code to decide if a jumper wire is needed, and if so, to
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is due to the quality of the input data or to the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool…
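Of the goodness-of-fit measures listed above, Nash-Sutcliffe efficiency has a standard closed form; this short sketch (Python, independent of the MPESA implementation and using made-up data) shows the computation:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; 0.0 means the model predicts no better than the observed mean."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

obs = [3.0, 4.0, 5.0, 6.0]
print(nash_sutcliffe(obs, obs))        # perfect fit -> 1.0
print(nash_sutcliffe(obs, [4.5] * 4))  # constant at the observed mean -> 0.0
```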
Toward diagnostic and phenotype markers for genetically transmitted speech delay.
Shriberg, Lawrence D; Lewis, Barbara A; Tomblin, J Bruce; McSweeny, Jane L; Karlsson, Heather B; Scheer, Alison R
2005-08-01
Converging evidence supports the hypothesis that the most common subtype of childhood speech sound disorder (SSD) of currently unknown origin is genetically transmitted. We report the first findings toward a set of diagnostic markers to differentiate this proposed etiological subtype (provisionally termed speech delay-genetic) from other proposed subtypes of SSD of unknown origin. Conversational speech samples from 72 preschool children with speech delay of unknown origin from 3 research centers were selected from an audio archive. Participants differed on the number of biological, nuclear family members (0 or 2+) classified as positive for current and/or prior speech-language disorder. Although participants in the 2 groups were found to have similar speech competence, as indexed by their Percentage of Consonants Correct scores, their speech error patterns differed significantly in 3 ways. Compared with children who may have reduced genetic load for speech delay (no affected nuclear family members), children with possibly higher genetic load (2+ affected members) had (a) a significantly higher proportion of relative omission errors on the Late-8 consonants; (b) a significantly lower proportion of relative distortion errors on these consonants, particularly on the sibilant fricatives /s/, /z/, and //; and (c) a significantly lower proportion of backed /s/ distortions, as assessed by both perceptual and acoustic methods. Machine learning routines identified a 3-part classification rule that included differential weightings of these variables. The classification rule had a diagnostic accuracy value of 0.83 (95% confidence limits = 0.74-0.92), with positive and negative likelihood ratios of 9.6 (95% confidence limits = 3.1-29.9) and 0.40 (95% confidence limits = 0.24-0.68), respectively. The diagnostic accuracy findings are viewed as promising.
The error pattern for this proposed subtype of SSD is viewed as consistent with the cognitive-linguistic processing deficits that have been reported for genetically transmitted verbal disorders.
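The positive and negative likelihood ratios reported in the abstract above have standard definitions in terms of sensitivity and specificity; a short sketch (Python, with hypothetical sensitivity/specificity values chosen for illustration, not the study's data) shows how such ratios are derived:

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Diagnostic likelihood ratios:
    LR+ = sensitivity / (1 - specificity)  (how much a positive result raises the odds)
    LR- = (1 - sensitivity) / specificity  (how much a negative result lowers the odds)"""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Hypothetical values for illustration only (not from the speech-delay study).
lr_pos, lr_neg = likelihood_ratios(sensitivity=0.8, specificity=0.9)
print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.2f}")  # LR+ = 8.0, LR- = 0.22
```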
NASA Astrophysics Data System (ADS)
Deka, A. J.; Bharathi, P.; Pandya, K.; Bandyopadhyay, M.; Bhuyan, M.; Yadav, R. K.; Tyagi, H.; Gahlaut, A.; Chakraborty, A.
2018-01-01
The Doppler Shift Spectroscopy (DSS) diagnostic is in the conceptual stage to estimate beam divergence, stripping losses, and beam uniformity of the 100 keV hydrogen Diagnostics Neutral Beam of the International Thermonuclear Experimental Reactor. This DSS diagnostic is used to measure the above-mentioned parameters with an error of less than 10%. To aid the design calculations and to establish a methodology for estimation of the beam divergence, DSS measurements were carried out on the existing prototype ion source RF Operated Beam Source in India for Negative ion Research. Emissions of the fast-excited neutrals that are generated from the extracted negative ions were collected in the target tank, and the line broadening of these emissions was used for estimating beam divergence. The observed broadening is a convolution of broadenings due to beam divergence, collection optics, voltage ripple, beam focusing, and instrumental broadening. Hence, for estimating the beam divergence from the observed line broadening, a systematic line profile analysis was performed. To minimize the error in the divergence measurements, a study on error propagation in the beam divergence measurements was carried out and the error was estimated. The measurements of beam divergence were done at a constant RF power of 50 kW and a source pressure of 0.6 Pa by varying the extraction voltage from 4 kV to 10 kV and the acceleration voltage from 10 kV to 15 kV. These measurements were then compared with the calorimetric divergence, and the results agreed within 10%. A minimum beam divergence of ˜3° was obtained when the source was operated at an extraction voltage of ˜5 kV and at a ˜10 kV acceleration voltage, i.e., at a total applied voltage of 15 kV. This is in agreement with the values reported in experiments carried out on similar sources elsewhere.
Why patients' disruptive behaviours impair diagnostic reasoning: a randomised experiment.
Mamede, Sílvia; Van Gog, Tamara; Schuit, Stephanie C E; Van den Berge, Kees; Van Daele, Paul L A; Bueving, Herman; Van der Zee, Tim; Van den Broek, Walter W; Van Saase, Jan L C M; Schmidt, H G
2017-01-01
Patients who display disruptive behaviours in the clinical encounter (the so-called 'difficult patients') may negatively affect doctors' diagnostic reasoning, thereby causing diagnostic errors. The present study aimed at investigating the mechanisms underlying the negative influence of difficult patients' behaviours on doctors' diagnostic performance. A randomised experiment with 74 internal medicine residents. Doctors diagnosed eight written clinical vignettes that were exactly the same except for the patients' behaviours (either difficult or neutral). Each participant diagnosed half of the vignettes in a difficult patient version and the other half in a neutral version in a counterbalanced design. After diagnosing each vignette, participants were asked to recall the patient's clinical findings and behaviours. Main measurements were: diagnostic accuracy scores; time spent on diagnosis, and amount of information recalled from patients' clinical findings and behaviours. Mean diagnostic accuracy scores (range 0-1) were significantly lower for difficult than neutral patients' vignettes (0.41 vs 0.51; p<0.01). Time spent on diagnosing was similar. Participants recalled fewer clinical findings (mean=29.82% vs mean=32.52%; p<0.001) and more behaviours (mean=25.51% vs mean=17.89%; p<0.001) from difficult than from neutral patients. Difficult patients' behaviours induce doctors to make diagnostic errors, apparently because doctors spend part of their mental resources on dealing with the difficult patients' behaviours, impeding adequate processing of clinical findings. Efforts should be made to increase doctors' awareness of the potential negative influence of difficult patients' behaviours on diagnostic decisions and their ability to counteract such influence. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Challenges in pediatric chronic inflammatory demyelinating polyneuropathy.
Haliloğlu, Göknur; Yüksel, Deniz; Temoçin, Cağri Mesut; Topaloğlu, Haluk
2016-12-01
Chronic inflammatory demyelinating neuropathy, a treatable immune-mediated disease of the peripheral nervous system, is less common in childhood than in adulthood. Despite different sets of diagnostic criteria, the lack of a reliable biologic marker leads to challenges in diagnosis, follow-up and treatment. Our first aim was to review clinical presentation, course, response to treatment, and prognosis in our childhood patients. We also aimed to document diagnostic and therapeutic pitfalls and challenges at the bedside. Our original cohort consisted of 23 pediatric patients who were referred to us with a clinical diagnosis of chronic inflammatory demyelinating neuropathy. Seven patients reaching an alternative diagnosis were excluded. In the remaining patients, diagnostic, treatment and follow-up data were compared between typical patients, who satisfied both clinical and electrodiagnostic criteria, and atypical patients, who failed to meet minimal research chronic inflammatory demyelinating neuropathy electrodiagnostic requirements. Eight of 16 patients (50%) met the minimal chronic inflammatory demyelinating neuropathy research diagnostic requirements. There was a statistically significant difference (p = 0.010) between the two groups only in terms of the European Neuromuscular Centre childhood chronic inflammatory demyelinating neuropathy mandatory clinical diagnostic criteria. Misdiagnosis due to errors in electrophysiological interpretation (100%, n = 8), cerebrospinal fluid cytoalbuminologic dissociation (100%, n = 4) and/or subjective improvement on any immunotherapy modality (80 ± 19.27%) was frequent. Pediatric CIDP is challenging in terms of diagnostic and therapeutic pitfalls at the bedside. Diagnostic errors due to electrophysiological interpretation, cerebrospinal fluid cytoalbuminologic dissociation, and/or subjective improvement on immunotherapy should be considered. Copyright © 2016 Elsevier B.V. All rights reserved.
Webster, Joshua D; Michalowski, Aleksandra M; Dwyer, Jennifer E; Corps, Kara N; Wei, Bih-Rong; Juopperi, Tarja; Hoover, Shelley B; Simpson, R Mark
2012-01-01
The extent to which histopathology pattern recognition image analysis (PRIA) agrees with microscopic assessment has not been established. Thus, a commercial PRIA platform was evaluated in two applications using whole-slide images. Substantial agreement, lacking significant constant or proportional errors, between PRIA and manual morphometric image segmentation was obtained for pulmonary metastatic cancer areas (Passing/Bablok regression). Bland-Altman analysis indicated heteroscedastic measurements and tendency toward increasing variance with increasing tumor burden, but no significant trend in mean bias. The average between-methods percent tumor content difference was -0.64. Analysis of between-methods measurement differences relative to the percent tumor magnitude revealed that method disagreement had an impact primarily in the smallest measurements (tumor burden <3%). Regression-based 95% limits of agreement indicated substantial agreement for method interchangeability. Repeated measures revealed concordance correlation of >0.988, indicating high reproducibility for both methods, yet PRIA reproducibility was superior (C.V.: PRIA = 7.4, manual = 17.1). Evaluation of PRIA on morphologically complex teratomas led to diagnostic agreement with pathologist assessments of pluripotency on subsets of teratomas. Accommodation of the diversity of teratoma histologic features frequently resulted in detrimental trade-offs, increasing PRIA error elsewhere in images. PRIA error was nonrandom and influenced by variations in histomorphology. File-size limitations encountered while training algorithms and consequences of spectral image processing dominance contributed to diagnostic inaccuracies experienced for some teratomas. PRIA appeared better suited for tissues with limited phenotypic diversity. Technical improvements may enhance diagnostic agreement, and consistent pathologist input will benefit further development and application of PRIA.
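The reproducibility comparison above (C.V.: PRIA = 7.4 vs manual = 17.1) uses the standard coefficient of variation; a minimal sketch (Python, with made-up repeated measurements, not the study's data) illustrates the computation:

```python
import statistics

def coefficient_of_variation(values) -> float:
    """Relative dispersion of repeated measurements: (sample std dev / mean) * 100.
    Lower values indicate more reproducible measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical repeated tumor-burden measurements (percent) for illustration only.
repeated = [10.0, 10.5, 9.5, 10.2, 9.8]
print(f"CV = {coefficient_of_variation(repeated):.1f}%")  # ≈ 3.8%
```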
Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J
2006-01-01
The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.
Meaningful Peer Review in Radiology: A Review of Current Practices and Potential Future Directions.
Moriarity, Andrew K; Hawkins, C Matthew; Geis, J Raymond; Dreyer, Keith J; Kamer, Aaron P; Khandheria, Paras; Morey, Jose; Whitfill, James; Wiggins, Richard H; Itri, Jason N
2016-12-01
The current practice of peer review within radiology is well developed and widely implemented compared with other medical specialties. However, there are many factors that limit current peer review practices from reducing diagnostic errors and improving patient care. The development of "meaningful peer review" requires a transition away from compliance toward quality improvement, whereby the information and insights gained facilitate education and drive systematic improvements that reduce the frequency and impact of diagnostic error. The next generation of peer review requires significant improvements in IT functionality and integration, enabling features such as anonymization, adjudication by multiple specialists, categorization and analysis of errors, tracking, feedback, and easy export into teaching files and other media that require strong partnerships with vendors. In this article, the authors assess various peer review practices, with focused discussion on current limitations and future needs for meaningful peer review in radiology. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Determining the Numeracy and Algebra Errors of Students in a Two-Year Vocational School
ERIC Educational Resources Information Center
Akyüz, Gözde
2015-01-01
The goal of this study was to determine the mathematics achievement level in basic numeracy and algebra concepts of students in a two-year program in a technical vocational school of higher education and determine the errors that they make in these topics. The researcher developed a diagnostic mathematics achievement test related to numeracy and…
ERIC Educational Resources Information Center
Nelson, Jonathan D.
2007-01-01
Reports an error in "Finding Useful Questions: On Bayesian Diagnosticity, Probability, Impact, and Information Gain" by Jonathan D. Nelson (Psychological Review, 2005[Oct], Vol 112[4], 979-999). In Table 13, the data should indicate that 7% of females had short hair and 93% of females had long hair. The calculations and discussion in the article…
Modelling and analysis of flux surface mapping experiments on W7-X
NASA Astrophysics Data System (ADS)
Lazerson, Samuel; Otte, Matthias; Bozhenkov, Sergey; Sunn Pedersen, Thomas; Bräuer, Torsten; Gates, David; Neilson, Hutch; W7-X Team
2015-11-01
The measurement and compensation of error fields in W7-X will be key to the device achieving high beta steady state operations. Flux surface mapping utilizes the vacuum magnetic flux surfaces, a feature unique to stellarators and heliotrons, to allow direct measurement of magnetic topology, and thereby allows a highly accurate determination of remnant magnetic field errors. As will be reported separately at this meeting, the first measurements confirming the existence of nested flux surfaces in W7-X have been made. In this presentation, a synthetic diagnostic for the flux surface mapping diagnostic is presented. It utilizes Poincaré traces to construct an image of the flux surface consistent with the measured camera geometry, fluorescent rod sweep plane, and emitter beam position. Forward modeling of the high-iota configuration will be presented demonstrating an ability to measure the intrinsic error field using the U.S. supplied trim coil system on W7-X, and a first experimental assessment of error fields in W7-X will be presented. This work has been authored by Princeton University under Contract Number DE-AC02-09CH11466 with the US Department of Energy.
Farris, Coreen; Viken, Richard J; Treat, Teresa A
2010-01-01
Young men's errors in sexual perception have been linked to sexual coercion. The current investigation sought to explicate the perceptual and decisional sources of these social perception errors, as well as their link to risk for sexual violence. General Recognition Theory (GRT; [Ashby, F. G., & Townsend, J. T. (1986). Varieties of perceptual independence. Psychological Review, 93, 154-179]) was used to estimate participants' ability to discriminate between affective cues and clothing style cues and to measure illusory correlations between men's perception of women's clothing style and sexual interest. High-risk men were less sensitive to the distinction between women's friendly and sexual interest cues relative to other men. In addition, they were more likely to perceive an illusory correlation between women's diagnostic sexual interest cues (e.g., facial affect) and non-diagnostic cues (e.g., provocative clothing), which increases the probability that high-risk men will misperceive friendly women as intending to communicate sexual interest. The results provide information about the degree of risk conferred by individual differences in perceptual processing of women's interest cues, and also illustrate how translational scientists might adapt GRT to examine research questions about individual differences in social perception.
A new dump system design for stray light reduction of Thomson scattering diagnostic system on EAST.
Xiao, Shumei; Zang, Qing; Han, Xiaofeng; Wang, Tengfei; Yu, Jin; Zhao, Junyu
2016-07-01
Thomson scattering (TS) is an important diagnostic for measuring electron temperature and density during plasma discharge. However, the measurement of the TS signal is easily disturbed by stray light. The stray light sources in the Experimental Advanced Superconducting Tokamak (EAST) TS diagnostic system were analyzed with a simulation model of the diagnostic system, and the simulation results show that the beam dump system is the primary stray light source. Based on optics theory and the simulation analysis, a novel dump system including an improved beam trap was proposed and installed. The measurement results indicate that the new dump system reduces stray light in the diagnostic system by more than 60%, decreasing the influence of stray light on the error of the measured density.
Possin, Katherine L; Chester, Serana K; Laluz, Victor; Bostrom, Alan; Rosen, Howard J; Miller, Bruce L; Kramer, Joel H
2012-09-01
On tests of design fluency, an examinee draws as many different designs as possible in a specified time limit while avoiding repetition. The neuroanatomical substrates and diagnostic group differences of design fluency repetition errors and total correct scores were examined in 110 individuals diagnosed with dementia, 53 with mild cognitive impairment (MCI), and 37 neurologically healthy controls. The errors correlated significantly with volumes in the right and left orbitofrontal cortex (OFC), the right and left superior frontal gyrus, the right inferior frontal gyrus, and the right striatum, but did not correlate with volumes in any parietal or temporal lobe regions. Regression analyses indicated that the lateral OFC may be particularly crucial for preventing these errors, even after excluding patients with behavioral variant frontotemporal dementia (bvFTD) from the analysis. Total correct correlated more diffusely with volumes in the right and left frontal and parietal cortex, the right temporal cortex, and the right striatum and thalamus. Patients diagnosed with bvFTD made significantly more repetition errors than patients diagnosed with MCI, Alzheimer's disease, semantic dementia, progressive supranuclear palsy, or corticobasal syndrome. In contrast, total correct design scores did not differentiate the dementia patients. These results highlight the frontal-anatomic specificity of design fluency repetitions. In addition, the results indicate that the propensity to make these errors supports the diagnosis of bvFTD. (JINS, 2012, 18, 1-11).
Kielar, Maciej
2016-01-01
Aim: The purpose of the study was to improve the ultrasonographic assessment of the anterior cruciate ligament by an inclusion of a dynamic element. The proposed functional modification aims to restore normal posterior cruciate ligament tension, which is associated with a visible change in the ligament shape. This method reduces the risk of an error resulting from subjectively assessing the shape of the posterior cruciate ligament. It should be also emphasized that the method combined with other ultrasound anterior cruciate ligament assessment techniques helps increase diagnostic accuracy. Methods: Ultrasonography is used as an adjunctive technique in the diagnosis of anterior cruciate ligament injury. The paper presents a sonographic technique for the assessment of suspected anterior cruciate ligament insufficiency supplemented by the use of a dynamic examination. This technique can be recommended as an additional procedure in routine ultrasound diagnostics of anterior cruciate ligament injuries. Results: Supplementing routine ultrasonography with the dynamic assessment of posterior cruciate ligament shape changes in patients with suspected anterior cruciate ligament injury reduces the risk of subjective errors and increases diagnostic accuracy. This is important especially in cases of minor anterior knee instability and bilateral anterior knee instability. Conclusions: An assessment of changes in posterior cruciate ligament shape using a dynamic ultrasound examination effectively complements routine sonographic diagnostic techniques for anterior cruciate ligament insufficiency. PMID:27679732
The thinking doctor: clinical decision making in contemporary medicine.
Trimble, Michael; Hamilton, Paul
2016-08-01
Diagnostic errors are responsible for a significant number of adverse events. Logical reasoning and good decision-making skills are key factors in reducing such errors, but little emphasis has traditionally been placed on how these thought processes occur, and how errors could be minimised. In this article, we explore key cognitive ideas that underpin clinical decision making and suggest that by employing some simple strategies, physicians might be better able to understand how they make decisions and how the process might be optimised.
Dichroic beamsplitter for high energy laser diagnostics
LaFortune, Kai N [Livermore, CA; Hurd, Randall [Tracy, CA; Fochs, Scott N [Livermore, CA; Rotter, Mark D [San Ramon, CA; Hackel, Lloyd [Livermore, CA
2011-08-30
Wavefront control techniques are provided for the alignment and performance optimization of optical devices. A Shack-Hartmann wavefront sensor can be used to measure the wavefront distortion and a control system generates feedback error signal to optics inside the device to correct the wavefront. The system can be calibrated with a low-average-power probe laser. An optical element is provided to couple the optical device to a diagnostic/control package in a way that optimizes both the output power of the optical device and the coupling of the probe light into the diagnostics.
ERIC Educational Resources Information Center
Guo, Ling-Yu; Schneider, Phyllis
2016-01-01
Purpose: To determine the diagnostic accuracy of the finite verb morphology composite (FVMC), number of errors per C-unit (Errors/CU), and percent grammatical C-units (PGCUs) in differentiating school-aged children with language impairment (LI) and those with typical language development (TL). Method: Participants were 61 six-year-olds (50 TL, 11…
An Application of M[subscript 2] Statistic to Evaluate the Fit of Cognitive Diagnostic Models
ERIC Educational Resources Information Center
Liu, Yanlou; Tian, Wei; Xin, Tao
2016-01-01
The fit of cognitive diagnostic models (CDMs) to response data needs to be evaluated, since CDMs might yield misleading results when they do not fit the data well. Limited-information statistic M[subscript 2] and the associated root mean square error of approximation (RMSEA[subscript 2]) in item factor analysis were extended to evaluate the fit of…
Burke, Marcus G.; Fonck, Raymond J.; Bongard, Michael W.; ...
2016-07-18
This article corrects an error in M.G. Burke et al., 'Multi-point, high-speed passive ion velocity distribution diagnostic on the Pegasus Toroidal Experiment,' Rev. Sci. Instrum. 83, 10D516 (2012) pertaining to ion temperature. The conclusions of this paper are not altered by the revised ion temperature measurements.
Schmidt, H G; Van Gog, Tamara; Schuit, Stephanie Ce; Van den Berge, Kees; Van Daele, Paul L; Bueving, Herman; Van der Zee, Tim; Van den Broek, Walter W; Van Saase, Jan L; Mamede, Sílvia
2017-01-01
Literature suggests that patients who display disruptive behaviours in the consulting room fuel negative emotions in doctors. These emotions, in turn, are said to cause diagnostic errors. Evidence substantiating this claim is however lacking. The purpose of the present experiment was to study the effect of such difficult patients' behaviours on doctors' diagnostic performance. We created six vignettes in which patients were depicted as difficult (displaying distressing behaviours) or neutral. Three clinical cases were deemed diagnostically simple and three diagnostically complex. Sixty-three family practice residents were asked to evaluate the vignettes and make the patient's diagnosis quickly and then through deliberate reflection. In addition, the amount of time needed to arrive at a diagnosis was measured. Finally, the participants rated the patient's likability. Mean diagnostic accuracy scores (range 0-1) were significantly lower for difficult than for neutral patients (0.54 vs 0.64; p=0.017). Overall diagnostic accuracy was higher for simple than for complex cases. Deliberate reflection upon the case improved initial diagnostic accuracy, regardless of case complexity and of patient behaviour (0.60 vs 0.68, p=0.002). The amount of time needed to diagnose the case was similar regardless of the patient's behaviour. Finally, average likability ratings were lower for difficult-patient than for neutral-patient cases. Disruptive behaviours displayed by patients seem to induce doctors to make diagnostic errors. Interestingly, the confrontation with difficult patients does not, however, lead doctors to spend less time on such cases. Time can therefore not be considered an intermediary between the way the patient is perceived, his or her likability, and diagnostic performance.
Boyce, Matthew R; Menya, Diana; Turner, Elizabeth L; Laktabai, Jeremiah; Prudhomme-O'Meara, Wendy
2018-05-18
Malaria rapid diagnostic tests (RDTs) are a simple, point-of-care technology that can improve the diagnosis and subsequent treatment of malaria. They are an increasingly common diagnostic tool, but concerns remain about their use by community health workers (CHWs). These concerns regard the long-term trends relating to infection prevention measures, the interpretation of test results and adherence to treatment protocols. This study assessed whether CHWs maintained their competency at conducting RDTs over a 12-month timeframe, and if this competency varied with specific CHW characteristics. From June to September, 2015, CHWs (n = 271) were trained to conduct RDTs using a 3-day validated curriculum and a baseline assessment was completed. Between June and August, 2016, CHWs (n = 105) were randomly selected and recruited for follow-up assessments using a 20-step checklist that classified steps as relating to safety, accuracy, and treatment; 103 CHWs participated in follow-up assessments. Poisson regressions were used to test for associations between error count data at follow-up and Poisson regression models fit using generalized estimating equations were used to compare data across time-points. At both baseline and follow-up observations, at least 80% of CHWs correctly completed 17 of the 20 steps. CHWs being 50 years of age or older was associated with increased total errors and safety errors at baseline and follow-up. At follow-up, prior experience conducting RDTs was associated with fewer errors. Performance, as it related to the correct completion of all checklist steps and safety steps, did not decline over the 12 months and performance of accuracy steps improved (mean error ratio: 0.51; 95% CI 0.40-0.63). Visual interpretation of RDT results yielded a CHW sensitivity of 92.0% and a specificity of 97.3% when compared to interpretation by the research team. None of the characteristics investigated was found to be significantly associated with RDT interpretation. 
With training, most CHWs performing RDTs maintain diagnostic testing competency over at least 12 months. CHWs generally perform RDTs safely and accurately interpret results. Younger age and prior experiences with RDTs were associated with better testing performance. Future research should investigate the mode by which CHW characteristics impact RDT procedures.
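The sensitivity and specificity reported above follow from a standard 2x2 comparison of CHW readings against the reference reading. The counts in this sketch are illustrative only, chosen to reproduce the reported 92.0% and 97.3%; the study's actual confusion matrix is not given in the abstract.

```python
# Sensitivity/specificity from a 2x2 confusion matrix (CHW reading vs. reference).
def sens_spec(tp, fn, tn, fp):
    """Return (sensitivity, specificity)."""
    sensitivity = tp / (tp + fn)  # true positives among reference-positive tests
    specificity = tn / (tn + fp)  # true negatives among reference-negative tests
    return sensitivity, specificity

# Hypothetical counts consistent with the reported rates: 46/50 and 36/37.
sens, spec = sens_spec(tp=46, fn=4, tn=36, fp=1)
```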
Noyer, Aurelien L; Esteves, Jorge E; Thomson, Oliver P
2017-01-01
Diagnostic reasoning refers to the cognitive processes by which clinicians formulate diagnoses. Despite the implications for patient safety and professional identity, research on diagnostic reasoning in osteopathy remains largely theoretical. The aim of this study was to investigate the influence of perceived task difficulty on the diagnostic reasoning of student osteopaths. Using a single-blinded, cross-sectional study design, sixteen final-year pre-registration osteopathy students diagnosed two standardized cases under two context conditions (complex versus control). Context difficulty was manipulated verbally, and case order was randomized and counterbalanced across subjects to ensure that each case was diagnosed evenly under both conditions (i.e., half of the subjects performed either case A or B first). After diagnosis, participants were presented with items (literal, inferred and filler) designed to represent analytical and non-analytical reasoning. Response time and error rate for each item were measured. A repeated-measures analysis of variance (concept type × context) was performed to identify differences across conditions and make inferences on diagnostic reasoning. Participants made significantly more errors when judging literal concepts and took significantly less time to recognize filler concepts in the complex context. No significant difference in the ability to judge inferred concepts across contexts was found. Although speculative and preliminary, our findings suggest that the perception of complexity led to an increased reliance on analytical reasoning to the detriment of non-analytical reasoning. To reduce the associated cognitive load, osteopathic educational institutions could consider developing the intuitive diagnostic capabilities of pre-registration students. Postgraduate mentorship opportunities could be considered to enhance the diagnostic reasoning of professional osteopaths, particularly recent graduates.
Further research exploring the influence of expertise is required to enhance the validity of this study.
Mappin, Bonnie; Cameron, Ewan; Dalrymple, Ursula; Weiss, Daniel J; Bisanzio, Donal; Bhatt, Samir; Gething, Peter W
2015-11-17
Large-scale mapping of Plasmodium falciparum infection prevalence relies on opportunistic assemblies of infection prevalence data arising from thousands of P. falciparum parasite rate (PfPR) surveys conducted worldwide. Variance in these data is driven by both signal, the true underlying pattern of infection prevalence, and a range of factors contributing to 'noise', including sampling error, differing age ranges of subjects and differing parasite detection methods. Whilst the former two noise components have been addressed in previous studies, the effect of different diagnostic methods used to determine PfPR in different studies has not. In particular, the majority of PfPR data are based on positivity rates determined by either microscopy or rapid diagnostic test (RDT), yet these approaches are not equivalent; therefore a method is needed for standardizing RDT and microscopy-based prevalence estimates prior to use in mapping. Twenty-five recent Demographic and Health surveys (DHS) datasets from sub-Saharan Africa provide child diagnostic test results derived using both RDT and microscopy for each individual. These prevalence estimates were aggregated across level one administrative zones and a Bayesian probit regression model fit to the microscopy- versus RDT-derived prevalence relationship. An errors-in-variables approach was employed to account for sampling error in both the dependent and independent variables. In addition to the diagnostic outcome, RDT type, fever status and recent anti-malarial treatment were extracted from the datasets in order to analyse their effect on observed malaria prevalence. A strong non-linear relationship between the microscopy and RDT-derived prevalence was found. 
The results of regressions stratified by the additional diagnostic variables (RDT type, fever status and recent anti-malarial treatment) indicate that there is a distinct and consistent difference in the relationship when the data are stratified by febrile status and RDT brand. The relationships defined in this research can be applied to RDT-derived PfPR data to effectively convert them to an estimate of the parasite prevalence expected using microscopy (or vice versa), thereby standardizing the dataset and improving the signal-to-noise ratio. Additionally, the results provide insight on the importance of RDT brands, febrile status and recent anti-malarial treatment for explaining inconsistencies between observed prevalence derived from different diagnostics.
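The standardization step described above amounts to mapping an RDT-derived prevalence onto a microscopy-equivalent value through a fitted probit-scale relationship. A minimal sketch follows; the intercept and slope are hypothetical placeholders (the paper fits these, with an errors-in-variables Bayesian model, to DHS survey data), and the simple form below ignores the stratification by RDT brand, fever status and treatment.

```python
# Probit-scale conversion between RDT-derived and microscopy-equivalent
# prevalence: microscopy = Phi(a + b * Phi^-1(rdt)).
from statistics import NormalDist

_N = NormalDist()  # standard normal, provides cdf and inverse cdf

def rdt_to_microscopy(p_rdt, intercept=-0.1, slope=0.9):
    """Map an RDT prevalence in (0, 1) to a microscopy-equivalent estimate.

    Default intercept/slope are illustrative assumptions, not fitted values.
    """
    return _N.cdf(intercept + slope * _N.inv_cdf(p_rdt))
```

With intercept 0 and slope 1 the mapping is the identity; a negative intercept and slope below 1, as assumed here, shrink RDT prevalence toward the lower microscopy-based estimates.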
Quinn, Gene R; Ranum, Darrell; Song, Ellen; Linets, Margarita; Keohane, Carol; Riah, Heather; Greenberg, Penny
2017-10-01
Diagnostic errors are an underrecognized source of patient harm, and cardiovascular disease can be challenging to diagnose in the ambulatory setting. Although malpractice data can inform diagnostic error reduction efforts, no studies have examined outpatient cardiovascular malpractice cases in depth. A study was conducted to examine the characteristics of outpatient cardiovascular malpractice cases brought against general medicine practitioners. Some 3,407 closed malpractice claims were analyzed in outpatient general medicine from CRICO Strategies' Comparative Benchmarking System database-the largest detailed database of paid and unpaid malpractice in the world-and multivariate models were created to determine the factors that predicted case outcomes. Among the 153 patients in cardiovascular malpractice cases for whom patient comorbidities were coded, the majority (63%) had at least one traditional cardiac risk factor, such as diabetes, tobacco use, or previous cardiovascular disease. Cardiovascular malpractice cases were more likely to involve an allegation of error in diagnosis (75% vs. 47%, p <0.0001), have high clinical severity (86% vs. 49%, p <0.0001) and result in death (75% vs. 27%, p <0.0001), as compared to noncardiovascular cases. Initial diagnoses of nonspecific chest pain and mimics of cardiovascular pain (for example, esophageal disease) were common and independently increased the likelihood of a claim resulting in a payment (p <0.01). Cardiovascular malpractice cases against outpatient general medicine physicians mostly occur in patients with conventional risk factors for coronary artery disease who are often diagnosed with common mimics of cardiovascular pain. These findings suggest that these patients may be high-yield targets for preventing diagnostic errors in the ambulatory setting.
NASA Astrophysics Data System (ADS)
Demidov, V. I.; Koepke, M. E.; Kurlyandskaya, I. P.; Malkov, M. A.
2018-02-01
This paper reviews existing theories for interpreting probe measurements of electron distribution functions (EDF) at high gas pressure when collisions of electrons with atoms and/or molecules near the probe are pervasive. An explanation of whether or not the measurements are realizable and reliable, an enumeration of the most common sources of measurement error, and an outline of proper probe-experiment design elements that inherently limit or avoid error is presented. Additionally, we describe recent expanded plasma-condition compatibility for EDF measurement, including in applications of large wall probe plasma diagnostics. This summary of the authors’ experiences gained over decades of practicing and developing probe diagnostics is intended to inform, guide, suggest, and detail the advantages and disadvantages of probe application in plasma research.
Error field measurement, correction and heat flux balancing on Wendelstein 7-X
Lazerson, Samuel A.; Otte, Matthias; Jakubowski, Marcin; ...
2017-03-10
The measurement and correction of error fields in Wendelstein 7-X (W7-X) is critical to long pulse high beta operation, as small error fields may cause overloading of divertor plates in some configurations. Accordingly, as part of a broad collaborative effort, the detection and correction of error fields on the W7-X experiment has been performed using the trim coil system in conjunction with the flux surface mapping diagnostic and high resolution infrared camera. In the early commissioning phase of the experiment, the trim coils were used to open an n/m = 1/2 island chain in a specially designed magnetic configuration. The flux surface mapping diagnostic was then able to directly image the magnetic topology of the experiment, allowing the inference of a small ~4 cm intrinsic island chain. The suspected main sources of the error field, slight misalignment and deformations of the superconducting coils, were then confirmed through experimental modeling using the detailed measurements of the coil positions. Observations of the limiter temperatures in module 5 show a clear dependence of the limiter heat flux pattern as the perturbing fields are rotated. Plasma experiments without applied correcting fields show a significant asymmetry in neutral pressure (centered in module 4) and light emission (visible, H-alpha, CII, and CIII). Such pressure asymmetry is associated with plasma-wall (limiter) interaction asymmetries between the modules. Application of trim coil fields with an n = 1 waveform corrects the imbalance. Confirmation of the error fields allows the assessment of magnetic fields which resonate with the n/m = 5/5 island chain.
Errors in imaging of traumatic injuries.
Scaglione, Mariano; Iaselli, Francesco; Sica, Giacomo; Feragalli, Beatrice; Nicola, Refky
2015-10-01
The advent of multi-detector computed tomography (MDCT) has drastically improved the outcomes of patients with multiple traumatic injuries. However, there are still diagnostic challenges to be considered. A missed or delayed diagnosis in trauma patients can sometimes be related to perceptual or other non-visual errors, while other errors are due to poor technique or poor image quality. In order to avoid any serious complications, it is important for the practicing radiologist to be cognizant of some of the most common types of errors. The objective of this article is to review the various types of errors in the evaluation of patients with multiple trauma injuries or polytrauma with MDCT.
Diagnostics for the detection and evaluation of laser induced damage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheehan, L.; Kozlowski, M.; Rainer, F.
1995-12-31
The Laser Damage and Conditioning Group at LLNL is evaluating diagnostics which will help make damage testing more efficient and reduce the risk of damage during laser conditioning. The work to date has focused on photoacoustic and scattered light measurements on 1064-nm wavelength HfO2/SiO2 multilayer mirror and polarizer coatings. Both the acoustic and scatter diagnostics have resolved 10 µm diameter damage points in these coatings. Using a scanning stage, the scatter diagnostic can map both intrinsic and laser-induced scatter. Damage threshold measurements obtained using scatter diagnostics compare within experimental error with those measured using 100x Nomarski microscopy. Scatter signals measured during laser conditioning can be used to detect damage related to nodular defects.
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2014-10-01
A diagnostic analysis of the space-time structure of error in Quantitative Precipitation Estimates (QPE) from the Precipitation Radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the Southern Appalachian Mountains, USA since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 V7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA, and missed detection, MD) and magnitude errors (underestimation, UND, and overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the Southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter), and especially in the inner region. Although UND dominates the magnitude error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total consistent with regional hydrometeorology. 
The 2A25 V7 product underestimates low level orographic enhancement of rainfall associated with fog, cap clouds and cloud to cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the terrain topography mask used to remove ground clutter effects. Precipitation associated with small-scale systems (< 25 km2) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are more likely subject to measurement uncertainty, that is raingauge underestimation errors due to under-catch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and local hydrometeorological regime that is strongly modulated by the diurnal cycle, pointing to three major error causes that are inter-related: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non uniform beam filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground clutter correction.
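The four-way error taxonomy used in this analysis (detection errors FA/MD, magnitude errors OVR/UND) can be sketched as a per-observation classifier of satellite QPE against a gauge measurement. The rain/no-rain threshold below is an assumed placeholder, not a value taken from the study.

```python
# Classify a satellite-vs-gauge rainfall pair into the error categories used
# in the diagnostic analysis: FA (false alarm), MD (missed detection),
# OVR (overestimation), UND (underestimation), or HIT/DRY (no error).
def classify_error(sat_mm, gauge_mm, thresh_mm=0.1):
    sat_rain = sat_mm >= thresh_mm
    gauge_rain = gauge_mm >= thresh_mm
    if sat_rain and not gauge_rain:
        return "FA"   # satellite reports rain the gauge did not record
    if gauge_rain and not sat_rain:
        return "MD"   # satellite misses gauge-recorded rain
    if sat_rain and gauge_rain:
        if sat_mm > gauge_mm:
            return "OVR"
        if sat_mm < gauge_mm:
            return "UND"
        return "HIT"  # detected and matched in magnitude
    return "DRY"      # both dry: nothing to classify
```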
NASA Astrophysics Data System (ADS)
Duan, Y.; Wilson, A. M.; Barros, A. P.
2015-03-01
A diagnostic analysis of the space-time structure of error in quantitative precipitation estimates (QPEs) from the precipitation radar (PR) on the Tropical Rainfall Measurement Mission (TRMM) satellite is presented here in preparation for the Integrated Precipitation and Hydrology Experiment (IPHEx) in 2014. IPHEx is the first NASA ground-validation field campaign after the launch of the Global Precipitation Measurement (GPM) satellite. In anticipation of GPM, a science-grade high-density raingauge network was deployed at mid to high elevations in the southern Appalachian Mountains, USA, since 2007. This network allows for direct comparison between ground-based measurements from raingauges and satellite-based QPE (specifically, PR 2A25 Version 7 using 5 years of data 2008-2013). Case studies were conducted to characterize the vertical profiles of reflectivity and rain rate retrievals associated with large discrepancies with respect to ground measurements. The spatial and temporal distribution of detection errors (false alarm, FA; missed detection, MD) and magnitude errors (underestimation, UND; overestimation, OVR) for stratiform and convective precipitation are examined in detail toward elucidating the physical basis of retrieval error. The diagnostic error analysis reveals that detection errors are linked to persistent stratiform light rainfall in the southern Appalachians, which explains the high occurrence of FAs throughout the year, as well as the diurnal MD maximum at midday in the cold season (fall and winter) and especially in the inner region. Although UND dominates the error budget, underestimation of heavy rainfall conditions accounts for less than 20% of the total, consistent with regional hydrometeorology. 
The 2A25 V7 product underestimates low-level orographic enhancement of rainfall associated with fog, cap clouds and cloud-to-cloud feeder-seeder interactions over ridges, and overestimates light rainfall in the valleys by large amounts, though this behavior is strongly conditioned by the coarse spatial resolution (5 km) of the topography mask used to remove ground-clutter effects. Precipitation associated with small-scale systems (< 25 km²) and isolated deep convection tends to be underestimated, which we attribute to non-uniform beam-filling effects due to spatial averaging of reflectivity at the PR resolution. Mixed precipitation events (i.e., cold fronts and snow showers) fall into the OVR or FA categories, but these are also the types of events for which observations from standard ground-based raingauge networks are most subject to measurement uncertainty, that is, raingauge underestimation due to undercatch and precipitation phase. Overall, the space-time structure of the errors shows strong links among precipitation, envelope orography, landform (ridge-valley contrasts), and a local hydrometeorological regime strongly modulated by the diurnal cycle, pointing to three inter-related major error causes: (1) representation of concurrent vertically and horizontally varying microphysics; (2) non-uniform beam-filling (NUBF) effects and ambiguity in the detection of bright band position; and (3) spatial resolution and ground-clutter correction.
Wu, Mixia; Zhang, Dianchen; Liu, Aiyi
2016-01-01
New biomarkers continue to be developed for the purpose of diagnosis, and their diagnostic performance is typically compared with an existing reference biomarker used for the same purpose. Considerable research has focused on receiver operating characteristic (ROC) curve analysis when the reference biomarker is dichotomous. In the situation where the reference biomarker is measured on a continuous scale and dichotomization is not practically appealing, an index was proposed in the literature to measure the accuracy of a continuous biomarker; this index is essentially a linear function of the popular Kendall's tau. We consider the issue of estimating such an accuracy index when the continuous reference biomarker is measured with error. We first investigate the impact of measurement errors on the accuracy index, and then propose methods to correct for the bias due to measurement errors. Simulation results show the effectiveness of the proposed estimator in reducing bias. The methods are exemplified with hemoglobin A1c measurements obtained from both a central lab and a local lab to evaluate the accuracy of mean data obtained from metered blood glucose monitoring against the centrally measured hemoglobin A1c, from a behavioral intervention study for families of youth with type 1 diabetes.
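As an illustrative sketch (not the authors' estimator), the attenuation the paper addresses can be reproduced with a toy simulation: Kendall's tau between a biomarker and the reference shrinks toward zero once noise is added to the reference, so any accuracy index that is a linear function of tau is biased toward the null. All distributions and parameters below are hypothetical.

```python
import random

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total pairs."""
    n = len(x)
    num = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            num += (s > 0) - (s < 0)
    return num / (n * (n - 1) / 2)

random.seed(42)
truth = [random.uniform(0, 10) for _ in range(300)]      # error-free reference values
biomarker = [t + random.gauss(0, 1) for t in truth]      # new biomarker under evaluation
noisy_ref = [t + random.gauss(0, 3) for t in truth]      # reference measured with error

tau_clean = kendall_tau(biomarker, truth)
tau_noisy = kendall_tau(biomarker, noisy_ref)
# Measurement error in the reference attenuates tau toward zero,
# biasing the naive tau-based accuracy index downward.
print(tau_clean, tau_noisy)
```

Running this shows `tau_noisy` well below `tau_clean`, which is exactly the bias a correction method must undo.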
NASA Astrophysics Data System (ADS)
Elze, Tobias; Baniasadi, Neda; Jin, Qingying; Wang, Hui; Wang, Mengyu
2017-12-01
Retinal nerve fiber layer thickness (RNFLT) measured by optical coherence tomography (OCT) is widely used in clinical practice to support glaucoma diagnosis. Clinicians frequently interpret peripapillary RNFLT areas marked as abnormal by OCT machines. However, clinical OCT machines presently do not take individual variation in retinal anatomy into account, and corresponding diagnostic biases have been demonstrated, particularly for patients with ametropia. The angle between the two major temporal retinal arteries (interartery angle, IAA) is considered a fundamental retinal ametropia marker. Here, we analyze peripapillary spectral-domain OCT RNFLT scans of 691 glaucoma patients and apply multivariate logistic regression to quantitatively compare the diagnostic bias of the spherical equivalent (SE) of refractive error and the IAA, and to identify the precise retinal locations of false-positive/negative abnormality marks. Independent of glaucoma severity (visual field mean deviation), IAA/SE variations biased abnormality marks on OCT RNFLT printouts at 36.7%/22.9% of the peripapillary area, respectively. 17.2% of the biases due to SE are not explained by IAA variation, particularly in inferonasal areas. To conclude, the inclusion of SE and IAA in OCT RNFLT norms would help to increase diagnostic accuracy. Our detailed location maps may help clinicians reduce diagnostic bias while interpreting retinal OCT scans.
Schultze, A E; Irizarry, A R
2017-02-01
Veterinary clinical pathologists are well positioned via education and training to assist in investigations of unexpected results or increased variation in clinical pathology data. Errors in testing and unexpected variability in clinical pathology data are sometimes referred to as "laboratory errors." These alterations may occur in the preanalytical, analytical, or postanalytical phases of studies. Most errors or variability in clinical pathology data occur in the preanalytical or postanalytical phases. True analytical errors occur within the laboratory and are usually the result of operator or instrument error. Analytical errors often constitute ≤10% of all errors in diagnostic testing, and the frequency of these types of errors has decreased in the last decade. Analytical errors and increased data variability may result from instrument malfunctions, failure to follow proper procedures, undetected failures in quality control, sample misidentification, and/or test interference. This article (1) illustrates several different types of analytical errors and situations within laboratories that may result in increased data variability, (2) provides recommendations regarding the prevention of testing errors and techniques to control variation, and (3) provides a list of references that describe and advise how to deal with increased data variability.
NASA Astrophysics Data System (ADS)
Porter, J. M.; Jeffries, J. B.; Hanson, R. K.
2009-09-01
A novel three-wavelength mid-infrared laser-based absorption/extinction diagnostic has been developed for simultaneous measurement of temperature and vapor-phase mole fraction in an evaporating hydrocarbon fuel aerosol (vapor and liquid droplets). The measurement technique was demonstrated for an n-decane aerosol with D50 ≈ 3 μm in steady and shock-heated flows with a measurement bandwidth of 125 kHz. Laser wavelengths were selected from FTIR measurements of the C-H stretching band of vapor and liquid n-decane near 3.4 μm (3000 cm⁻¹), and from modeled light scattering from droplets. Measurements were made for vapor mole fractions below 2.3 percent with errors less than 10 percent, and simultaneous temperature measurements over the range 300 K < T < 900 K were made with errors less than 3 percent. The measurement technique is designed to provide accurate values of temperature and vapor mole fraction in evaporating polydispersed aerosols with small mean diameters (D50 < 10 μm), where near-infrared laser-based scattering corrections are prone to error.
Assimilation of surface NO2 and O3 observations into the SILAM chemistry transport model
NASA Astrophysics Data System (ADS)
Vira, J.; Sofiev, M.
2014-08-01
This paper describes assimilation of trace gas observations into the chemistry transport model SILAM using the 3D-Var method. Assimilation results for year 2012 are presented for the prominent photochemical pollutants ozone (O3) and nitrogen dioxide (NO2). Both species are covered by the Airbase observation database, which provides the observational dataset used in this study. Attention is paid to the background and observation error covariance matrices, which are obtained primarily by iterative application of a posteriori diagnostics. The diagnostics are computed separately for two months representing summer and winter conditions, and further disaggregated by time of day. This allows deriving background and observation error covariance definitions which include both seasonal and diurnal variation. The consistency of the obtained covariance matrices is verified using χ2 diagnostics. The analysis scores are computed for a control set of observation stations withheld from assimilation. Compared to a free-running model simulation, the correlation coefficient for daily maximum values is improved from 0.8 to 0.9 for O3 and from 0.53 to 0.63 for NO2.
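As a toy illustration of the ideas (the paper itself uses full 3D-Var with background and observation covariance matrices), a scalar analysis with H = 1 and a χ² innovation-consistency diagnostic can be sketched as follows; the variances, truth value and sample size are arbitrary assumptions, not values from the study.

```python
import math
import random

def analysis(xb, y, b, r):
    """Scalar 3D-Var analysis: blend background xb (error variance b)
    with one observation y (error variance r), for observation operator H = 1."""
    gain = b / (b + r)
    return xb + gain * (y - xb)

# Chi-squared consistency diagnostic: with correctly specified b and r,
# the normalized squared innovation d^2/(b+r) averages to 1 over many cases.
random.seed(1)
b, r, truth = 4.0, 1.0, 10.0
chi2 = []
for _ in range(5000):
    xb = truth + random.gauss(0, math.sqrt(b))   # simulated background
    y = truth + random.gauss(0, math.sqrt(r))    # simulated observation
    d = y - xb                                   # innovation
    chi2.append(d * d / (b + r))

mean_chi2 = sum(chi2) / len(chi2)
print(mean_chi2)   # near 1 when the assumed covariances are consistent
```

If the assumed `b` and `r` were too small, `mean_chi2` would exceed 1, which is the signal the a posteriori diagnostics use to retune the covariances.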
Research of Fast DAQ system in KSTAR Thomson scattering diagnostic
NASA Astrophysics Data System (ADS)
Lee, J. H.; Kim, H. J.; Yamada, I.; Funaba, H.; Kim, Y. G.; Kim, D. Y.
2017-12-01
The Thomson scattering diagnostic is one of the most important diagnostic systems in fusion plasma research. It provides reliable electron temperature and density profiles in magnetically confined plasma. A Q-switched Nd:YAG Thomson system was installed several years ago in the KSTAR tokamak to measure electron temperature and density profiles. The KSTAR Thomson scattering system used a charge-to-digital conversion (QDC) type data acquisition system to measure the pulsed Thomson signal. Recently, however, an error was found in the Te and ne calculation, because the QDC system integrated stray-light-like contributions along with the Thomson pulse. To overcome such errors, we introduce a fast data acquisition (F-DAQ) system. To test this, we use a CAEN V1742, a 5 GS/s, 12-bit, 32-channel switched-capacitor digitizer in the Versa Module Eurocard bus (VMEbus) form factor. In this experiment, we compare the Te results calculated from Thomson scattering data measured simultaneously with QDC and F-DAQ. In the F-DAQ system, the pulse shape was restored by fitting.
Colour coding for blood collection tube closures - a call for harmonisation.
Simundic, Ana-Maria; Cornes, Michael P; Grankvist, Kjell; Lippi, Giuseppe; Nybo, Mads; Ceriotti, Ferruccio; Theodorsson, Elvar; Panteghini, Mauro
2015-02-01
At least one in 10 patients experiences adverse events while receiving hospital care. Many of these errors are related to laboratory diagnostics. Efforts to reduce laboratory errors over recent decades have primarily focused on the measurement process, while pre- and post-analytical errors, including errors in sampling, reporting and decision-making, have received much less attention. Proper sampling and proper additives to the samples are essential. Tubes and additives are identified not only in writing on the tubes but also by the colour of the tube closures. Unfortunately, these colours have not been standardised, creating a risk of error when tubes from one manufacturer are replaced by tubes from another manufacturer that uses a different colour coding. The EFLM therefore supports the worldwide harmonisation of the colour coding for blood collection tube closures and labels in order to reduce the risk of pre-analytical errors and improve patient safety.
Estimation of diagnostic test accuracy without full verification: a review of latent class methods
Collins, John; Huynh, Minh
2014-01-01
The performance of a diagnostic test is best evaluated against a reference test that is without error. For many diseases, this is not possible, and an imperfect reference test must be used. However, diagnostic accuracy estimates may be biased if inaccurately verified status is used as the truth. Statistical models have been developed to handle this situation by treating disease as a latent variable. In this paper, we conduct a systematized review of statistical methods using latent class models for estimating test accuracy and disease prevalence in the absence of complete verification. PMID:24910172
Development and Evaluation of a Diagnostic Documentation Support System using Knowledge Processing
NASA Astrophysics Data System (ADS)
Makino, Kyoko; Hayakawa, Rumi; Terai, Koichi; Fukatsu, Hiroshi
In this paper, we introduce a system that supports the creation of diagnostic reports. Diagnostic reports are documents in which radiologists describe the presence or absence of abnormalities in inspection images, such as CT and MRI, and summarize a patient's state and disease. Our system indicates insufficiencies in reports created by younger doctors, using knowledge processing based on a medical knowledge dictionary. These indications cover not only clerical errors: the system also analyzes the purpose of the inspection and determines whether a comparison with a former inspection is required, or whether the description falls short. We verified our system using actual data of 2,233 report pairs, each pair comprising a report written by a younger doctor and the check result of that report by an experienced doctor. The verification showed that the string-analysis rules for detecting clerical errors and sentence wordiness achieved a recall of over 90% and a precision of over 75%. Moreover, the rules based on a medical knowledge dictionary for detecting a missing required comparison with a former inspection, or a description insufficient for the inspection purpose, achieved a recall of over 70%. From these results, we confirmed that our system contributes to improving the quality of diagnostic reports. We expect that our system can comprehensively support diagnostic documentation by cooperating with an interface that refers to inspection images or past reports.
Impact of lossy compression on diagnostic accuracy of radiographs for periapical lesions
NASA Technical Reports Server (NTRS)
Eraso, Francisco E.; Analoui, Mostafa; Watson, Andrew B.; Rebeschini, Regina
2002-01-01
OBJECTIVES: The purpose of this study was to evaluate lossy Joint Photographic Experts Group (JPEG) compression for endodontic pretreatment digital radiographs. STUDY DESIGN: Fifty clinical charge-coupled device-based digital radiographs depicting periapical areas were selected. Each image was compressed at compression ratios of 2, 4, 8, 16, 32, 48, and 64. One root per image was marked for examination. Images were randomized and viewed by four clinical observers under standardized viewing conditions. Each observer read the image set three times, with at least two weeks between readings. Three pre-selected sites per image (mesial, distal, apical) were scored on a five-point confidence scale. A panel of three examiners scored the uncompressed images, reaching a consensus score for each site. The consensus score was used as the baseline for assessing the impact of lossy compression on the diagnostic value of the images. The mean absolute error between consensus and observer scores was computed for each observer, site, and reading session. RESULTS: Balanced one-way analysis of variance for all observers indicated that for compression ratios of 48 and 64, there was a significant difference between the mean absolute error of uncompressed and compressed images (P < .05). After converting the five-point scores to two-level diagnostic values, diagnostic accuracy was strongly correlated (R² = 0.91) with the compression ratio. CONCLUSION: The results of this study suggest that high compression ratios can have a severe impact on the diagnostic quality of digital radiographs for the detection of periapical lesions.
NASA Astrophysics Data System (ADS)
Chen, Po-Hao; Botzolakis, Emmanuel; Mohan, Suyash; Bryan, R. N.; Cook, Tessa
2016-03-01
In radiology, diagnostic errors occur through either failure of detection or incorrect interpretation. Errors are estimated to occur in 30-35% of all exams and contribute to 40-54% of medical malpractice litigations. In this work, we focus on reducing the incorrect interpretation of known imaging features. Existing literature categorizes the cognitive biases that lead a radiologist to an incorrect diagnosis despite correct recognition of the abnormal imaging features: anchoring bias, the framing effect, availability bias, and premature closure. Computational methods make a unique contribution, as they do not exhibit the same cognitive biases as a human. Bayesian networks formalize the diagnostic process. They modify pre-test diagnostic probabilities using clinical and imaging features, arriving at a post-test probability for each possible diagnosis. To translate Bayesian networks to clinical practice, we implemented an entirely web-based open-source software tool. In this tool, the radiologist first selects a network of choice (e.g. basal ganglia). Then, large, clearly labeled buttons displaying salient imaging features are shown on the screen, serving both as a checklist and as input. As the radiologist enters the value of an extracted imaging feature, the conditional probabilities of each possible diagnosis are updated. The software presents its level of diagnostic discrimination using a Pareto distribution chart, updated with each additional imaging feature. Active collaboration with the clinical radiologist is a feasible approach to software design and leads to design decisions closely coupling the complex mathematics of conditional probability in Bayesian networks with practice.
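A minimal sketch of the pre-test-to-post-test update such a tool performs, assuming a naive single-feature Bayes-rule step; the diagnoses, feature names and probabilities below are purely hypothetical, not taken from the authors' networks.

```python
def update_posteriors(priors, likelihoods, feature):
    """One Bayes-rule step: multiply each diagnosis prior by the probability
    of the observed imaging feature under that diagnosis, then renormalize."""
    post = {dx: p * likelihoods[dx].get(feature, 0.0) for dx, p in priors.items()}
    total = sum(post.values())
    return {dx: v / total for dx, v in post.items()}

# Hypothetical pre-test probabilities and feature likelihoods (illustrative only).
priors = {"dx_A": 0.7, "dx_B": 0.3}
likelihoods = {
    "dx_A": {"calcification": 0.1, "restricted_diffusion": 0.6},
    "dx_B": {"calcification": 0.8, "restricted_diffusion": 0.2},
}

post = update_posteriors(priors, likelihoods, "calcification")
# Observing calcification shifts probability toward dx_B despite its lower prior.
print(post)
```

Each additional feature the radiologist enters would repeat this step with the current posteriors as the new priors (which assumes conditional independence of features, a simplification a real network does not need).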
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors.
Shaw, M; Singh, S
2015-04-01
Diagnostic error has implications for both clinical outcome and resource utilisation, and may often be traced to impaired data gathering, processing or synthesis because of the influence of cognitive bias. Factors inherent to the intensive/acute care environment afford multiple additional opportunities for such errors to occur. This article illustrates many of these with reference to a case encountered on our intensive care unit. Strategies to improve the completeness of data gathering, processing and synthesis in the acute care environment are critically appraised in the context of early detection and amelioration of cognitive bias. These include reflection, targeted simulation training and the integration of social media and IT-based aids into complex diagnostic processes. A framework which can be quickly and easily employed in a variety of clinical environments is then presented. © 2015 John Wiley & Sons Ltd.
The real malady of Marcel Proust and what it reveals about diagnostic errors in medicine.
Douglas, Yellowlees
2016-05-01
Marcel Proust, author of À La Recherche du Temps Perdu, was considered a hypochondriac not only by the numerous specialists he consulted during his lifetime but also by every literary critic who ventured an opinion on his health, among them several clinicians. However, Proust's voluminous correspondence, as detailed in its attention to his every symptom as his novel is, provides valuable clues to Proust's real, organic, and rare illness. Proust, in fact, was not only genuinely ill but far sicker than even he believed, most likely suffering from the vascular subtype of Ehlers-Danlos Syndrome. Ironically, Proust's own doctors and his clinician-critics replicated the same kinds of diagnostic errors clinicians still routinely make today, shedding light on the plight of patients with rare illnesses. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Bogachkov, I. V.; Lutchenko, S. S.
2018-05-01
The article deals with a method for assessing the reliability of fiber optic communication lines (FOCL), taking into account the effect of optical fiber tension, the influence of temperature, and errors of the first kind in the built-in diagnostic equipment. Reliability is assessed in terms of the availability factor using the theory of Markov chains and probabilistic mathematical modeling. To obtain a mathematical model, the following steps are performed: the FOCL states are defined and validated; the state graph and system transitions are described; the transitions between states occurring at a given moment are specified; and the real and observed times of the system's presence in the considered states are identified. From the permissible value of the availability factor, the limiting frequency of FOCL maintenance can be determined.
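A minimal sketch of the availability-factor idea, assuming a hypothetical three-state discrete-time Markov chain in which false alarms (first-kind errors of the built-in diagnostic equipment) cause short spurious outages; all transition probabilities below are invented for illustration and are not the paper's model.

```python
def steady_state(P, n_iter=2000):
    """Steady-state distribution of a discrete-time Markov chain by power iteration."""
    k = len(P)
    pi = [1.0 / k] * k
    for _ in range(n_iter):
        pi = [sum(pi[i] * P[i][j] for i in range(k)) for j in range(k)]
    return pi

# Hypothetical per-step transition probabilities for a FOCL-like system:
# state 0 = operating, 1 = down with a real fault, 2 = down on a false alarm.
P = [
    [0.990, 0.006, 0.004],   # operating -> real fault / false alarm
    [0.200, 0.800, 0.000],   # real fault being repaired
    [0.500, 0.000, 0.500],   # false alarm cleared quickly
]

pi = steady_state(P)
availability = pi[0]   # availability factor = long-run fraction of time operating
print(availability)
```

Raising the false-alarm probability in row 0 lowers `availability` even though no real fault occurred, which is why first-kind diagnostic errors enter the reliability assessment.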
Imperfect Gold Standards for Kidney Injury Biomarker Evaluation
Betensky, Rebecca A.; Emerson, Sarah C.; Bonventre, Joseph V.
2012-01-01
Clinicians have used serum creatinine in diagnostic testing for acute kidney injury for decades, despite its imperfect sensitivity and specificity. Novel tubular injury biomarkers may revolutionize the diagnosis of acute kidney injury; however, even if a novel tubular injury biomarker is 100% sensitive and 100% specific, it may appear inaccurate when using serum creatinine as the gold standard. Acute kidney injury, as defined by serum creatinine, may not reflect tubular injury, and the absence of changes in serum creatinine does not assure the absence of tubular injury. In general, the apparent diagnostic performance of a biomarker depends not only on its ability to detect injury, but also on disease prevalence and the sensitivity and specificity of the imperfect gold standard. Assuming that, at a certain cutoff value, serum creatinine is 80% sensitive and 90% specific and disease prevalence is 10%, a new perfect biomarker with a true 100% sensitivity may seem to have only 47% sensitivity compared with serum creatinine as the gold standard. Minimizing misclassification by using more strict criteria to diagnose acute kidney injury will reduce the error when evaluating the performance of a biomarker under investigation. Apparent diagnostic errors using a new biomarker may be a reflection of errors in the imperfect gold standard itself, rather than poor performance of the biomarker. The results of this study suggest that small changes in serum creatinine alone should not be used to define acute kidney injury in biomarker or interventional studies. PMID:22021710
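The abstract's 47% figure can be checked directly. Under the stated assumption that the new biomarker is truly perfect and that the two tests err independently, the apparent sensitivity is P(new test positive and gold positive) / P(gold positive):

```python
def apparent_sensitivity(prev, gold_sens, gold_spec, true_sens=1.0, true_spec=1.0):
    """Apparent sensitivity of a new test judged against an imperfect gold
    standard, assuming the two tests err independently given disease status."""
    # P(gold positive and new test positive)
    both_pos = prev * gold_sens * true_sens + (1 - prev) * (1 - gold_spec) * (1 - true_spec)
    # P(gold positive)
    gold_pos = prev * gold_sens + (1 - prev) * (1 - gold_spec)
    return both_pos / gold_pos

# Numbers from the abstract: prevalence 10%, creatinine 80% sensitive / 90% specific,
# new biomarker truly 100% sensitive and specific.
print(round(apparent_sensitivity(0.10, 0.80, 0.90), 2))  # 0.47
```

The perfect biomarker looks poor only because 9% of the population are false positives of the gold standard itself (0.08 / (0.08 + 0.09) ≈ 0.47).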
BREAST: a novel method to improve the diagnostic efficacy of mammography
NASA Astrophysics Data System (ADS)
Brennan, P. C.; Tapia, K.; Ryan, J.; Lee, W.
2013-03-01
High-quality breast imaging and accurate image assessment are critical to the early diagnosis, treatment and management of women with breast cancer. The Breast Screen Reader Assessment Strategy (BREAST) provides a platform, accessible by researchers and clinicians worldwide, which will contain image databases, algorithms to assess reader performance and online systems for image evaluation. The platform will contribute to the diagnostic efficacy of breast imaging in Australia and beyond on two fronts: reducing errors in mammography, and transforming our assessment of novel technologies and techniques. Mammography is the primary diagnostic tool for detecting breast cancer, with over 800,000 women X-rayed each year in Australia; however, it fails to detect 30% of breast cancers, a number of which are visible on the image [1-6]. BREAST will monitor mistakes, identify reasons for mammographic errors, and facilitate innovative solutions to reduce error rates. The BREAST platform has the potential to enable expert assessment of breast imaging innovations wherever in the world the experts or innovations are located. Currently, innovations are often assessed by the limited number of individuals who happen to be geographically close to the innovation, resulting in equivocal studies with low statistical power. BREAST will transform this paradigm by enabling large numbers of experts to assess any new method or technology using our embedded evaluation methods. We are confident that this world-first system will play an important part in the future efficacy of breast imaging.
Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.
Gupta, Rajarshi
2016-05-01
Electrocardiogram (ECG) compression finds wide application in various patient monitoring settings. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or an error control (EC) criterion, was set to select the optimal principal components, eigenvectors and their quantization levels to achieve the desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia data (mitdb) and 60 normal and 30 diagnostic ECG data sets from the PTB Diagnostic ECG database (ptbdb), all sampled at 1 kHz. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root-mean-squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
Payne, Velma L; Medvedeva, Olga; Legowski, Elizabeth; Castine, Melissa; Tseytlin, Eugene; Jukic, Drazen; Crowley, Rebecca S
2009-11-01
Determine the effects of a limited-enforcement intelligent tutoring system in dermatopathology on student errors, goals and solution paths. Determine whether limited enforcement in a medical tutoring system inhibits students from learning the optimal and most efficient solution path. Describe the types of deviations from the optimal solution path that occur during tutoring, and how these deviations change over time. Determine whether the size of the problem space (domain scope) has an effect on learning gains when using a tutor with limited enforcement. We analyzed data mined from 44 pathology residents using SlideTutor, a medical intelligent tutoring system in dermatopathology that teaches histopathologic diagnosis and reporting skills based on commonly used diagnostic algorithms. Two subdomains were included in the study, representing sub-algorithms of different sizes and complexities. Effects of the tutoring system on student errors, goal states and solution paths were determined. Students gradually increase the frequency of steps that match the tutoring system's expectation of expert performance. The frequency of errors gradually declines in all categories of error significance. Student performance frequently differs from the tutor-defined optimal path. However, as students continue to be tutored, they approach the optimal solution path. Performance in both subdomains was similar for both errors and goal differences. However, the rate at which students progress toward the optimal solution path differs between the two domains: tutoring in superficial perivascular dermatitis, the larger and more complex domain, was associated with a slower rate of approximation toward the optimal solution path. Students benefit from a limited-enforcement tutoring system that leverages diagnostic algorithms but does not prevent alternative strategies. Even with limited enforcement, students converge toward the optimal solution path.
Azin, Arash; Saleh, Fady; Cleghorn, Michelle; Yuen, Andrew; Jackson, Timothy; Okrainec, Allan; Quereshy, Fayez A
2017-03-01
Colonoscopy for colorectal cancer (CRC) has a localization error rate as high as 21%. Such errors can have substantial clinical consequences, particularly in laparoscopic surgery. The primary objective of this study was to compare the accuracy of tumor localization at initial endoscopy performed by either the operating surgeon or a non-operating referring endoscopist. All patients who underwent surgical resection for CRC at a large tertiary academic hospital between January 2006 and August 2014 were identified. The exposure of interest was the initial endoscopist: (1) the surgeon who also performed the definitive operation (operating surgeon group); and (2) a referring gastroenterologist or general surgeon (referring endoscopist group). The outcome measure was localization error, defined as a difference of at least one anatomic segment between initial endoscopy and final operative location. Multivariate logistic regression was used to explore the association between localization error rate and the initial endoscopist. A total of 557 patients were included in the study: 81 patients in the operating surgeon cohort and 476 patients in the referring endoscopist cohort. Initial diagnostic colonoscopy performed by the operating surgeon, compared to the referring endoscopist, demonstrated a statistically significantly lower intraoperative localization error rate (1.2 vs. 9.0%, P = 0.016); shorter mean time from endoscopy to surgery (52.3 vs. 76.4 days, P = 0.015); higher tattoo localization rate (32.1 vs. 21.0%, P = 0.027); and lower preoperative repeat endoscopy rate (8.6 vs. 40.8%, P < 0.001). Initial endoscopy performed by the operating surgeon was protective against localization error on both univariate analysis, OR 7.94 (95% CI 1.08-58.52; P = 0.016), and multivariate analysis, OR 7.97 (95% CI 1.07-59.38; P = 0.043). This study demonstrates that diagnostic colonoscopies performed by the operating surgeon are independently associated with a lower localization error rate.
Further research exploring the factors influencing localization accuracy and why operating surgeons have lower error rates relative to non-operating endoscopists is necessary to understand differences in care.
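As a hedged back-calculation (the abstract does not give raw counts; 43/476 ≈ 9.0% errors in the referring-endoscopist group and 1/81 ≈ 1.2% in the operating-surgeon group are inferred from the reported rates), the univariate odds ratio and its Woolf-type confidence interval can be reproduced:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio from a 2x2 table with a Woolf (log-normal) confidence interval.
    a, b = errors / no errors in group 1; c, d = errors / no errors in group 2."""
    orat = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(orat) - z * se)
    hi = math.exp(math.log(orat) + z * se)
    return orat, lo, hi

# Counts inferred from the abstract's percentages (hypothetical reconstruction):
# referring endoscopist: 43 errors / 433 without; operating surgeon: 1 / 80.
orat, lo, hi = odds_ratio_ci(43, 433, 1, 80)
print(round(orat, 2), round(lo, 2), round(hi, 2))  # close to the reported OR 7.94 (1.08-58.52)
```

The very wide interval reflects the single error event in the surgeon group, which is why the multivariate estimate in the abstract is similarly imprecise.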
G Pitman, Alexander
2017-06-01
Referral to a clinical radiologist is the prime means of communication between the referrer and the radiologist. Current Australian and New Zealand government regulations do not prescribe what clinical information should be included in a referral. This work presents a qualitative compilation of clinical radiologist opinion, relevant professional recommendations, government regulatory positions and prior work on diagnostic error to synthesise recommendations on what clinical information should be included in a referral. The recommended requirements for a referral to a clinical radiologist are as follows: an unambiguous referral; the identity of the patient; the identity of the referrer; and sufficient clinical detail to justify performance of the diagnostic imaging examination and to confirm the appropriate choice of examination and modality. A recommended guideline on the content of clinical detail clarifies when the information provided in a referral meets these requirements. High-quality information in a referral allows the clinical radiologist to ensure that the exposure of patients to medical radiation is justified. It also minimises the incidence of perceptual and interpretational diagnostic error. The recommended requirements and guideline on the clinical detail to be provided in a referral to a clinical radiologist have been formulated for professional debate and adoption. © 2017 The Royal Australian and New Zealand College of Radiologists.
Intelligent monitoring of critical pathological events during anesthesia.
Gohil, Bhupendra; Gholamhhosseini, Hamid; Harrison, Michael J; Lowe, Andrew; Al-Jumaily, Ahmed
2007-01-01
Expert algorithms in the field of intelligent patient monitoring have rapidly revolutionized patient care, thereby improving patient safety. Patient monitoring during anesthesia requires cautious attention by anesthetists, who monitor many modalities, diagnose clinically critical events and perform patient management tasks simultaneously. The mishaps that occur during day-to-day anesthesia, causing disastrous errors in anesthesia administration, were classified and studied by Reason [1]. Human errors in anesthesia account for 82% of preventable mishaps [2]. The aim of this paper is to develop a clinically useful diagnostic alarm system for detecting critical events during anesthesia administration. The development of an expert diagnostic alarm system called 'RT-SAAM' for detecting critical pathological events in the operating theatre is presented. This system provides decision support to the anesthetist by presenting the diagnostic results on an integrative, ergonomic display, thus enhancing patient safety. The performance of the system was validated through a series of offline and real-time tests in the operating theatre. When detecting absolute hypovolaemia (AHV), a moderate level of agreement was observed between RT-SAAM and the human expert (anesthetist) during surgical procedures. RT-SAAM is a clinically useful diagnostic tool which can be easily modified for diagnosing additional critical pathological events such as relative hypovolaemia, fall in cardiac output, sympathetic response and malignant hyperpyrexia during surgical procedures. RT-SAAM is currently being tested at Auckland City Hospital with ethical approval from the local ethics committees.
The most common mistakes on dermatoscopy of melanocytic lesions
Kamińska-Winciorek, Grażyna
2015-01-01
Dermatoscopy is a method of in vivo evaluation of structures within the epidermis and dermis. Currently, it may be the most precise pre-surgical method of diagnosing melanocytic lesions. Diagnostic errors may result in the unnecessary removal of benign lesions or, what is even worse, can cause early and very early melanomas to be overlooked. Errors in dermatoscopic assessment can be divided into those arising from failure to maintain proper test procedures (procedural and technical errors) and knowledge-based mistakes related to a lack of sufficient familiarity and experience with dermatoscopy. The article discusses the most common mistakes made by beginner or inexperienced dermatoscopists. PMID:25821425
Radiologic Errors in Patients With Lung Cancer
Forrest, John V.; Friedman, Paul J.
1981-01-01
Some 20 to 50 percent of detectable malignant lesions are missed or misdiagnosed at the time of their first radiologic appearance. These errors can result in delayed diagnosis and treatment, which may affect a patient's survival. Use of moderately high kilovolt peak (130 to 150 kVp) films, awareness of the portions of the lung where lesions are often missed (such as the lung apices and the paramediastinal and hilar areas), careful comparison of current roentgenograms with those taken previously, and the use of an independent second observer can help to minimize the rate of radiologic diagnostic errors in patients with lung cancer. PMID:7257363
Benign phyllodes tumor with tubular adenoma-like epithelial component in FNAC: A diagnostic pitfall.
Panda, Kishori M
2016-01-01
Benign phyllodes tumor (BPT) is a biphasic neoplasm composed of bland stromal and epithelial elements. Although cytologic diagnostic criteria for BPT are documented in the literature, diagnostic pitfalls in fine-needle aspiration cytology (FNAC) may occur due to sampling error, high cellularity, ductal hyperplasia, paucity of the stromal component, and occasional dissociation of epithelial cells. Here, we describe a case of BPT diagnosed by histology in a 19-year-old female, in which FNAC features were inconclusive due to paucity of the stromal component, predominance of a tubular adenoma-like epithelial component, and the presence of other features overlapping with fibroadenoma.
Multiparameter measurement utilizing poloidal polarimeter for burning plasma reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Imazawa, Ryota; Kawano, Yasunori; Itami, Kiyoshi
2014-08-21
The authors have conducted basic and applied research on the polarimeter for plasma diagnostics. Recently, the authors have proposed an application of multiparameter measurement (magnetic field, B, electron density, n{sub e}, electron temperature, T{sub e}, and total plasma current, I{sub p}) utilizing a polarimeter in future fusion reactors. In this paper, a brief review of the polarimeter, the principle of the multiparameter measurement and the progress of research on the multiparameter measurement are presented. The measurement method that the authors have proposed is suitable for a reactor for the following reasons: multiple parameters can be obtained from a small number of diagnostics, the proposed method does not depend on time history, and the far-infrared light used by the polarimeter is less sensitive to degradation of optical components. Taking measurement error into account, a performance assessment of the proposed method was carried out. Assuming errors of Δθ = 0.1° and Δε = 0.6°, the errors of the reconstructed j{sub φ}, n{sub e} and T{sub e} were 12%, 8.4% and 31%, respectively. This study has shown that the reconstruction error can be decreased by increasing the number of wavelengths of the probing laser and by increasing the number of viewing chords. For example, by increasing the number of viewing chords to forty-five, the errors of j{sub φ}, n{sub e} and T{sub e} were reduced to 4.4%, 4.4%, and 17%, respectively.
NASA Astrophysics Data System (ADS)
Johnson, Traci L.; Sharon, Keren
2016-11-01
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
Bonert, Michael; El-Shinnawy, Ihab; Carvalho, Michael; Williams, Phillip; Salama, Samih; Tang, Damu; Kapoor, Anil
2017-01-01
Observational data and funnel plots are routinely used outside of pathology to understand trends and improve performance. The objectives were to extract diagnostic rate (DR) information from free-text surgical pathology reports with synoptic elements, and to assess whether inter-rater variation and clinical history completeness information useful for continuous quality improvement (CQI) can be obtained. All in-house prostate biopsies in a 6-year period at two large teaching hospitals were extracted and then diagnostically categorized using string matching, fuzzy string matching, and hierarchical pruning. DRs were then stratified by the submitting physicians and pathologists. Funnel plots were created to assess for diagnostic bias. 3,854 prostate biopsies were found and all could be diagnostically classified. Two audits involving the review of 700 reports and a comparison of the synoptic elements with the free-text interpretations suggest a categorization error rate of <1%. Twenty-seven pathologists each read >40 cases and together assessed 3,690 biopsies. There was considerable inter-rater variability and a trend toward more World Health Organization/International Society of Urological Pathology Grade 1 cancers among older pathologists. Normalized deviation plots, constructed using the median DR and standard error, can elucidate the associated over- and under-calls for an individual pathologist in relation to their practice group. Clinical history completeness by submitting medical doctor varied significantly (100% to 22%). Free-text data analyses have some limitations; however, they could be used for data-driven CQI in anatomical pathology, and could lead to the next generation in quality of care.
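Funnel plots of the kind described above place each pathologist's diagnostic rate against control limits around the group-wide rate, limits that widen as case volume falls. A minimal sketch, with a hypothetical group-wide rate and case count:

```python
import math

def funnel_limits(p_overall, n, z=1.96):
    """Approximate 95% control limits for a proportion around the
    group-wide rate p_overall, for a pathologist who read n cases."""
    se = math.sqrt(p_overall * (1 - p_overall) / n)
    return p_overall - z * se, p_overall + z * se

# Hypothetical numbers: group-wide Grade 1 call rate of 20%,
# one pathologist who read 100 biopsies.
lo, hi = funnel_limits(0.20, 100)
print(round(lo, 4), round(hi, 4))  # 0.1216 0.2784
```

A pathologist whose own DR falls outside these limits would appear as a potential over- or under-caller relative to the practice group; smaller caseloads give wider limits, so low-volume readers are not flagged spuriously.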
Douali, Nassim; Csaba, Huszka; De Roo, Jos; Papageorgiou, Elpiniki I; Jaulent, Marie-Christine
2014-01-01
Several studies have described the prevalence and severity of diagnostic errors. Diagnostic errors can arise from cognitive, training, educational and other issues. Examples of cognitive issues include flawed reasoning, incomplete knowledge, faulty information gathering or interpretation, and inappropriate use of decision-making heuristics. We describe a new approach, case-based fuzzy cognitive maps, for medical diagnosis and evaluate it by comparison with Bayesian belief networks. We created a semantic web framework that supports the two reasoning methods. We used a database of 174 anonymous patients from several European hospitals: 80 of the patients were female and 94 male, with an average age of 45±16 (mean±SD). Thirty of the 80 female patients were pregnant. For each patient, signs/symptoms/observables/age/sex were taken into account by the system. We used a statistical approach to compare the two methods. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
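A fuzzy cognitive map infers a diagnosis by repeatedly propagating concept activations through a signed weight matrix. A toy sketch of one synchronous update step; the concepts and weights here are purely illustrative assumptions, not taken from the study:

```python
import math

def fcm_step(acts, weights):
    """One synchronous fuzzy-cognitive-map update: the new activation of
    concept i is sigmoid(sum_j weights[j][i] * acts[j])."""
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    n = len(acts)
    return [sig(sum(weights[j][i] * acts[j] for j in range(n)))
            for i in range(n)]

# Hypothetical 3-concept map: symptom and sign both excite a disease concept.
acts = [1.0, 1.0, 0.0]   # symptom present, sign present, disease unknown
W = [[0, 0, 0.8],        # symptom -> disease
     [0, 0, 0.6],        # sign -> disease
     [0, 0, 0]]
print([round(a, 3) for a in fcm_step(acts, W)])
```

After one step the disease concept's activation rises above the 0.5 midpoint, reflecting the combined positive evidence; in practice the map is iterated until activations stabilize.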
Case-based clinical reasoning in feline medicine: 2: Managing cognitive error.
Canfield, Paul J; Whitehead, Martin L; Johnson, Robert; O'Brien, Carolyn R; Malik, Richard
2016-03-01
This is Article 2 of a three-part series on clinical reasoning that encourages practitioners to explore and understand how they think and make case-based decisions. It is hoped that, in the process, they will learn to trust their intuition but, at the same time, put in place safeguards to diminish the impact of bias and misguided logic on their diagnostic decision-making. Article 1, published in the January 2016 issue of JFMS, discussed the relative merits and shortcomings of System 1 thinking (immediate and unconscious) and System 2 thinking (effortful and analytical). This second article examines ways of managing cognitive error, particularly the negative impact of bias, when making a diagnosis. Article 3, to appear in the May 2016 issue, explores the use of heuristics (mental short cuts) and illness scripts in diagnostic reasoning. © The Author(s) 2016.
NASA Technical Reports Server (NTRS)
Pallix, Joan B.; Copeland, Richard A.; Arnold, James O. (Technical Monitor)
1995-01-01
Advanced laser-based diagnostics have been developed to examine catalytic effects and atom/surface interactions on thermal protection materials. This study establishes the feasibility of using laser-induced fluorescence for detection of O and N atom loss in a diffusion tube to measure surface catalytic activity. The experimental apparatus is versatile in that it allows fluorescence detection to be used for measuring species selective recombination coefficients as well as diffusion tube and microwave discharge diagnostics. Many of the potential sources of error in measuring atom recombination coefficients by this method have been identified and taken into account. These include scattered light, detector saturation, sample surface cleanliness, reactor design, gas pressure and composition, and selectivity of the laser probe. Recombination coefficients and their associated errors are reported for N and O atoms on a quartz surface at room temperature.
OSA severity assessment based on sleep breathing analysis using ambient microphone.
Dafna, E; Tarasiuk, A; Zigel, Y
2013-01-01
In this paper, an audio-based system for severity estimation of obstructive sleep apnea (OSA) is proposed. The system estimates the apnea-hypopnea index (AHI), which is the average number of apneic events per hour of sleep. This system is based on a Gaussian mixture regression algorithm that was trained and validated on full-night audio recordings. Feature selection process using a genetic algorithm was applied to select the best features extracted from time and spectra domains. A total of 155 subjects, referred to in-laboratory polysomnography (PSG) study, were recruited. Using the PSG's AHI score as a gold-standard, the performances of the proposed system were evaluated using a Pearson correlation, AHI error, and diagnostic agreement methods. Correlation of R=0.89, AHI error of 7.35 events/hr, and diagnostic agreement of 77.3% were achieved, showing encouraging performances and a reliable non-contact alternative method for OSA severity estimation.
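The Pearson correlation and mean AHI error used to evaluate the system above can be computed directly from paired scores. A minimal sketch with hypothetical AHI values (audio estimate vs. PSG gold standard, events/hr):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_abs_error(x, y):
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

# Hypothetical paired AHI scores for five subjects.
est = [4.0, 11.0, 19.0, 33.0, 52.0]   # audio-based estimate
psg = [5.0, 14.0, 16.0, 30.0, 55.0]   # polysomnography reference
print(round(pearson_r(est, psg), 3), mean_abs_error(est, psg))  # 0.988 2.6
```

The study's R = 0.89 and AHI error of 7.35 events/hr are exactly these two quantities computed over its 155 full-night recordings.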
Main sources of errors in diagnosis of chronic radiation sickness (in Russian)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soldatova, V.A.
1973-11-01
With the aim of identifying the main sources of error in the diagnosis of chronic radiation sickness, the author analyzed a total of 500 cases of this sickness in roentgenologists and radiologists sent to the clinic to be examined according to occupational indications. It was shown that the main source of error when interpreting the observed deviations as occupational was underestimation of the etiological significance of functional and organic diseases of the nervous system, endocrine-vascular dystonia, and also such diseases as hypochromic anemia and chronic infection. The majority of diagnostic errors is explained by insufficient knowledge of the main regularities in the formation of the picture of chronic radiation sickness and by the absence of the necessary differential diagnosis with general somatic diseases.
Assimilation of surface NO2 and O3 observations into the SILAM chemistry transport model
NASA Astrophysics Data System (ADS)
Vira, J.; Sofiev, M.
2015-02-01
This paper describes the assimilation of trace gas observations into the chemistry transport model SILAM (System for Integrated modeLling of Atmospheric coMposition) using the 3D-Var method. Assimilation results for the year 2012 are presented for the prominent photochemical pollutants ozone (O3) and nitrogen dioxide (NO2). Both species are covered by the AirBase observation database, which provides the observational data set used in this study. Attention was paid to the background and observation error covariance matrices, which were obtained primarily by the iterative application of a posteriori diagnostics. The diagnostics were computed separately for 2 months representing summer and winter conditions, and further disaggregated by time of day. This enabled the derivation of background and observation error covariance definitions, which included both seasonal and diurnal variation. The consistency of the obtained covariance matrices was verified using χ2 diagnostics. The analysis scores were computed for a control set of observation stations withheld from assimilation. Compared to a free-running model simulation, the correlation coefficient for daily maximum values was improved from 0.8 to 0.9 for O3 and from 0.53 to 0.63 for NO2.
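In the scalar case, the 3D-Var analysis and the χ² consistency diagnostic described above reduce to a few lines: the analysis blends background and observation by their error variances, and the squared innovation divided by the total variance should average about 1 when those variances are well specified. A minimal sketch with hypothetical values:

```python
def analysis(xb, y, B, R):
    """Scalar 3D-Var analysis: x_a = x_b + K (y - x_b), with gain K = B/(B+R)."""
    return xb + B * (y - xb) / (B + R)

def chi2(xb, y, B, R):
    """Chi-square diagnostic for one innovation d = y - x_b: d^2 / (B + R).
    Averaged over many observations, this should be ~1 if B and R are consistent."""
    d = y - xb
    return d * d / (B + R)

# Hypothetical surface O3 values (ppb): background 40, observation 46,
# background error variance B = 4, observation error variance R = 2.
xb, y, B, R = 40.0, 46.0, 4.0, 2.0
print(analysis(xb, y, B, R), chi2(xb, y, B, R))  # 44.0 6.0
```

Here a single χ² of 6 (far above 1) would suggest the assumed variances are too small for this innovation; iterating such diagnostics over months of data is how the paper tunes its seasonally and diurnally varying covariances.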
Abboud, Marcus; Calvo-Guirado, Jose Luis; Orentlicher, Gary; Wahl, Gerhard
2013-01-01
This study compared the accuracy of cone beam computed tomography (CBCT) and medical-grade CT in the context of evaluating the diagnostic value and accuracy of fiducial marker localization for reference marker-based guided surgery systems. Cadaver mandibles with attached radiopaque gutta-percha markers, as well as glass balls and composite cylinders of known dimensions, were measured manually with a highly accurate digital caliper. The objects were then scanned using a medical-grade CT scanner (Philips Brilliance 64) and five different CBCT scanners (Sirona Galileos, Morita 3D Accuitomo 80, Vatech PaX-Reve3D, 3M Imtech Iluma, and Planmeca ProMax 3D). The data were then imported into commercially available software, and measurements were made of the scanned markers and objects. CT and CBCT measurements were compared to each other and to the caliper measurements. The difference between the CBCT measurements and the caliper measurements was larger than the difference between the CT measurements and the caliper measurements. Measurements of the cadaver mandible and the geometric reference markers were highly accurate with CT. The average absolute errors of the human mandible measurements were 0.03 mm for CT and 0.23 mm for CBCT. The measurement errors of the geometric objects based on CT ranged between 0.00 and 0.12 mm, compared to an error range between 0.00 and 2.17 mm with the CBCT scanners. CT provided the most accurate images in this study, closely followed by one CBCT of the five tested. Although there were differences in the distance measurements of the hard tissue of the human mandible between CT and CBCT, these differences may not be of clinical significance for most diagnostic purposes. The fiducial marker localization error caused by some CBCT scanners may be a problem for guided surgery systems.
Discrepancies in reporting the CAG repeat lengths for Huntington's disease
Quarrell, Oliver W; Handley, Olivia; O'Donovan, Kirsty; Dumoulin, Christine; Ramos-Arroyo, Maria; Biunno, Ida; Bauer, Peter; Kline, Margaret; Landwehrmeyer, G Bernhard
2012-01-01
Huntington's disease results from a CAG repeat expansion within the huntingtin gene; this is measured routinely in diagnostic laboratories. The European Huntington's Disease Network REGISTRY project centrally measures CAG repeat lengths on fresh samples; these were compared with the original results from 121 laboratories across 15 countries. We report on 1326 duplicate results; a discrepancy in reporting the upper allele occurred in 51% of cases, which reduced to 13.3% and 9.7% when we applied the acceptable measurement errors proposed by the American College of Medical Genetics and the Draft European Best Practice Guidelines, respectively. Duplicate results were available for 1250 lower alleles; discrepancies occurred in 40% of cases. Clinically significant discrepancies occurred in 4.0% of cases, with a potential unexplained misdiagnosis rate of 0.3%. There was considerable variation in the discrepancy rate among 10 of the countries participating in this study. Of the 1326 samples, 348 were re-analysed by an accredited diagnostic laboratory based in Germany, with concordance rates of 93% and 94% for the upper and lower alleles, respectively. This became 100% when the acceptable measurement errors were applied. The central laboratory correctly reported allele sizes for six standard reference samples, blind to the known results. Our study differs from external quality assessment (EQA) schemes in that these are duplicate results obtained from a large sample of patients across the whole diagnostic range. We strongly recommend that laboratories state an error rate for their measurement on the report, participate in EQA schemes and use reference materials regularly to adjust their own internal standards. PMID:21811303
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
[Medical errors: inevitable but preventable].
Giard, R W
2001-10-27
Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing the outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions of a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation, and especially not to the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort to detect them, sound analysis and, where feasible, the institution of preventive measures.
McMahon, Camilla M.; Henderson, Heather A.
2014-01-01
Error-monitoring, or the ability to recognize one's mistakes and implement behavioral changes to prevent further mistakes, may be impaired in individuals with Autism Spectrum Disorder (ASD). Children and adolescents (ages 9-19) with ASD (n = 42) and typical development (n = 42) completed two face processing tasks that required discrimination of either the gender or affect of standardized face stimuli. Post-error slowing and the difference in Error-Related Negativity amplitude between correct and incorrect responses (ERNdiff) were used to index error-monitoring ability. Overall, ERNdiff increased with age. On the Gender Task, individuals with ASD had a smaller ERNdiff than individuals with typical development; however, on the Affect Task, there were no significant diagnostic group differences on ERNdiff. Individuals with ASD may have ERN amplitudes similar to those observed in individuals with typical development in more social contexts compared to less social contexts due to greater consequences for errors, more effortful processing, and/or reduced processing efficiency in these contexts. Across all participants, more post-error slowing on the Affect Task was associated with better social cognitive skills. PMID:25066088
Shahly, Victoria; Berglund, Patricia A; Coulouvrat, Catherine; Fitzgerald, Timothy; Hajak, Goeran; Roth, Thomas; Shillington, Alicia C; Stephenson, Judith J; Walsh, James K; Kessler, Ronald C
2012-10-01
Insomnia is a common and seriously impairing condition that often goes unrecognized. To examine associations of broadly defined insomnia (ie, meeting inclusion criteria for a diagnosis from International Statistical Classification of Diseases, 10th Revision, DSM-IV, or Research Diagnostic Criteria/International Classification of Sleep Disorders, Second Edition) with costly workplace accidents and errors after excluding other chronic conditions among workers in the America Insomnia Survey (AIS). A national cross-sectional telephone survey (65.0% cooperation rate) of commercially insured health plan members selected from the more than 34 million in the HealthCore Integrated Research Database. Four thousand nine hundred ninety-one employed AIS respondents. Costly workplace accidents or errors in the 12 months before the AIS interview were assessed with one question about workplace accidents "that either caused damage or work disruption with a value of $500 or more" and another about other mistakes "that cost your company $500 or more." Current insomnia with duration of at least 12 months was assessed with the Brief Insomnia Questionnaire, a validated (area under the receiver operating characteristic curve, 0.86 compared with diagnoses based on blinded clinical reappraisal interviews), fully structured diagnostic interview. Eighteen other chronic conditions were assessed with medical/pharmacy claims records and validated self-report scales. Insomnia had a significant odds ratio with workplace accidents and/or errors controlled for other chronic conditions (1.4). The odds ratio did not vary significantly with respondent age, sex, educational level, or comorbidity. The average costs of insomnia-related accidents and errors ($32 062) were significantly higher than those of other accidents and errors ($21 914). Simulations estimated that insomnia was associated with 7.2% of all costly workplace accidents and errors and 23.7% of all the costs of these incidents. 
These proportions are higher than for any other chronic condition, with annualized US population projections of 274 000 costly insomnia-related workplace accidents and errors having a combined value of US $31.1 billion. Effectiveness trials are needed to determine whether expanded screening, outreach, and treatment of workers with insomnia would yield a positive return on investment for employers.
Alfsen, G Cecilie; Chen, Ying; Kähler, Hanne; Bukholm, Ida Rashida Khan
2016-12-01
The Norwegian System of Patient Injury Compensation (NPE) processes compensation claims from patients who complain about malpractice in the health services. A wrong diagnosis in pathology may cause serious injury to the patient, but the incidence of compensation claims is unknown, because pathology is not specified as a separate category in NPE’s statistics. Knowledge about errors is required to assess quality-enhancing measures. We have therefore searched through the NPE records to identify cases whose background stems from errors committed in pathology departments and laboratories. We have searched through the NPE records for cases related to pathology for the years 2010 – 2015. During this period the NPE processed a total of 26 600 cases, of which 93 were related to pathology. The compensation claim was upheld in 66 cases, resulting in total compensation payments amounting to NOK 63 million. False-negative results in the form of undetected diagnoses were the most frequent grounds for compensation claims (63 cases), with an undetected malignant melanoma (n = 23) or atypia in cell samples from the cervix uteri (n = 16) as the major groups. Sixteen cases involved non-diagnostic issues such as mix-up of samples (n = 8), contamination of samples (n = 4) or delayed responses (n = 4). The number of compensation claims caused by errors in pathology diagnostics is low in relative terms. The errors may, however, be of a serious nature, especially if malignant conditions are overlooked or samples mixed up.
Teaching clinical reasoning: case-based and coached.
Kassirer, Jerome P
2010-07-01
Optimal medical care is critically dependent on clinicians' skills to make the right diagnosis and to recommend the most appropriate therapy, and acquiring such reasoning skills is a key requirement at every level of medical education. Teaching clinical reasoning is grounded in several fundamental principles of educational theory. Adult learning theory posits that learning is best accomplished by repeated, deliberate exposure to real cases, that case examples should be selected for their reflection of multiple aspects of clinical reasoning, and that the participation of a coach augments the value of an educational experience. The theory proposes that memory of clinical medicine and clinical reasoning strategies is enhanced when errors in information, judgment, and reasoning are immediately pointed out and discussed. Rather than using cases artificially constructed from memory, real cases are greatly preferred because they often reflect the false leads, the polymorphisms of actual clinical material, and the misleading test results encountered in everyday practice. These concepts foster the teaching and learning of the diagnostic process, the complex trade-offs between the benefits and risks of diagnostic tests and treatments, and cognitive errors in clinical reasoning. The teaching of clinical reasoning need not and should not be delayed until students gain a full understanding of anatomy and pathophysiology. Concepts such as hypothesis generation, pattern recognition, context formulation, diagnostic test interpretation, differential diagnosis, and diagnostic verification provide both the language and the methods of clinical problem solving. Expertise is attainable even though the precise mechanisms of achieving it are not known.
Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C
2016-09-01
The DSM-5 was published in 2013 and it included two substantive revisions for gambling disorder (GD). These changes are the reduction in the threshold from five to four criteria and elimination of the illegal activities criterion. The purpose of this study was twofold: first, to assess the reliability, validity, and classification accuracy of the DSM-5 diagnostic criteria for GD; second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of eliminating the illegal acts criterion on diagnostic accuracy. To compare DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for DSM-5 over DSM-IV, particularly in reduction of false negative errors. This reduction in false negative errors was largely a function of lowering the cut score from five to four and this revision is an improvement over DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not make a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.
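The cut-score effect the abstract describes (lowering the threshold from five to four criteria reduces false negatives) can be sketched as follows; the criterion counts and case labels below are invented for illustration only:

```python
# Hypothetical sketch: how a lower diagnostic cut score reduces
# false negatives. Data are invented, not from the study.

def classify(criteria_met: int, cut: int) -> bool:
    """Positive diagnosis when endorsed criteria meet the cut score."""
    return criteria_met >= cut

def false_negatives(cases, cut):
    """Count true-GD cases missed at a given cut score."""
    return sum(1 for met, has_gd in cases if has_gd and not classify(met, cut))

# (criteria endorsed, truly has gambling disorder) -- illustrative only
cases = [(4, True), (5, True), (6, True), (3, False), (4, True), (2, False)]

fn_dsm_iv = false_negatives(cases, cut=5)  # DSM-IV threshold: five criteria
fn_dsm_5 = false_negatives(cases, cut=4)   # DSM-5 threshold: four criteria
```

Here the two true cases endorsing exactly four criteria are missed at the DSM-IV cut but captured at the DSM-5 cut, the mechanism behind the reduced false-negative rate the study reports.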
Pelaccia, Thierry; Tardif, Jacques; Triby, Emmanuel; Ammirati, Christine; Bertrand, Catherine; Dory, Valérie; Charlin, Bernard
2014-12-01
The ability to make a diagnosis is a crucial skill in emergency medicine. Little is known about the way emergency physicians reach a diagnosis. This study aims to identify how and when, during the initial patient examination, emergency physicians generate and evaluate diagnostic hypotheses. We carried out a qualitative research project based on semistructured interviews with emergency physicians. The interviews concerned management of an emergency situation during routine medical practice. They were associated with viewing the video recording of emergency situations filmed in an "own-point-of-view" perspective. The emergency physicians generated an average of 5 diagnostic hypotheses. Most of these hypotheses were generated before meeting the patient or within the first 5 minutes of the meeting. The hypotheses were then rank ordered within the context of a verification procedure based on identifying key information. These tasks were usually accomplished without conscious effort. No hypothesis was completely confirmed or refuted until the results of investigations were available. The generation and rank ordering of diagnostic hypotheses is based on the activation of cognitive processes, enabling expert emergency physicians to process environmental information and link it to past experiences. The physicians seemed to strive to avoid the risk of error by remaining aware of the possibility of alternative hypotheses as long as they did not have the results of investigations. Understanding the diagnostic process used by emergency physicians provides interesting ideas for training residents in a specialty in which the prevalence of reasoning errors leading to incorrect diagnoses is high. Copyright © 2014 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Becker, Elizabeth A.; Griffith, Derek M.; West, Brady T.; Janz, Nancy K.; Resnicow, Ken; Morris, Arden M.
2015-01-01
Background Screening and post-symptomatic diagnostic testing are often conflated in cancer screening surveillance research. We examined the error in estimated colorectal cancer (CRC) screening prevalence due to the conflation of screening and diagnostic testing. Methods Using data from the 2008 National Health Interview Survey, we compared weighted prevalence estimates of the use of all testing (screening and diagnostic) and screening in at-risk adults, and calculated the overestimation of screening prevalence across socio-demographic groups. Results The population screening prevalence was overestimated by 23.3%, and the level of overestimation varied widely across socio-demographic groups (median 22.6%, mean 24.8%). The highest levels of overestimation were in non-Hispanic White females (27.4%), adults ages 50–54 (32.0%), and those with the highest socioeconomic vulnerability (low educational attainment (31.3%), low poverty ratio (32.5%), no usual source of health care (54.4%) and not insured (51.6%)) (all p-values < 0.001). Conclusions When the impetus for testing was not included, CRC screening prevalence was overestimated, and patterns of overestimation often aligned with social and economic vulnerability. These results are of concern to researchers who utilize survey data from the Behavioral Risk Factor Surveillance System (BRFSS) to assess cancer screening behaviors, as it is currently not designed to distinguish diagnostic testing from screening. Impact Surveillance research in cancer screening that does not consider the impetus for testing risks measurement error of screening prevalence, impeding progress toward improving population health. Ultimately, in order to craft relevant screening benchmarks and interventions, we must look beyond ‘what’ and ‘when’ and include ‘why.’ PMID:26491056
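The overestimation the study quantifies is a simple relative error. A minimal sketch; the two prevalence values are hypothetical, chosen only to illustrate the magnitude of the reported 23.3% population-level figure:

```python
# Sketch of the overestimation metric: relative error (%) introduced when
# diagnostic testing is counted as screening. Prevalences are hypothetical.

def overestimation_pct(all_testing_prev: float, screening_prev: float) -> float:
    """Relative overestimation (%) of screening prevalence when
    screening and diagnostic testing are conflated."""
    return 100.0 * (all_testing_prev - screening_prev) / screening_prev

# e.g., 60.4% report any testing but only 49.0% were actually screened
est = overestimation_pct(0.604, 0.490)
```

The same function applied per socio-demographic subgroup would reproduce the study's pattern of widely varying overestimation across groups.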
Adenocarcinoma in situ of the cervix.
Schoolland, Meike; Segal, Amanda; Allpress, Stephen; Miranda, Alina; Frost, Felicity A; Sterrett, Gregory F
2002-12-25
The current study examines 1) the sensitivity of detection and 2) sampling and screening/diagnostic error in the cytologic diagnosis of adenocarcinoma in situ (AIS) of the cervix. The data were taken from public and private sector screening laboratories reporting 25,000 and 80,000 smears, respectively, each year. The study group comprised women with a biopsy diagnosis of AIS or AIS combined with a high-grade squamous intraepithelial lesion (HSIL) who were accessioned by the Western Australian Cervical Cytology Registry (WACCR) between 1993-1998. Cervical smears reported by the Western Australia Centre for Pathology and Medical Research (PathCentre) or Western Diagnostic Pathology (WDP) in the 36 months before the index biopsy was obtained were retrieved. A true measure of the sensitivity of detection could not be determined because to the authors' knowledge the exact prevalence of disease is unknown at present. For the current study, sensitivity was defined as the percentage of smears reported as demonstrating a possible or definite high-grade epithelial abnormality (HGEA), either glandular or squamous. Sampling error was defined as the percentage of smears found to have no HGEA on review. Screening/diagnostic error was defined as the percentage of smears in which HGEA was not diagnosed initially but review demonstrated possible or definite HGEA. Sensitivity also was calculated for a randomly selected control group of biopsy proven cases of Grade 3 cervical intraepithelial neoplasia (CIN 3) accessioned at the WACCR in 1999. For biopsy findings of AIS alone, the diagnostic "sensitivity" of a single smear was 47.6% for the PathCentre and 54.3% for WDP. Nearly all the abnormalities were reported as glandular. The sampling and screening/diagnostic errors were 47.6% and 4.8%, respectively, for the PathCentre and 33.3% and 12.3%, respectively, for WDP.
The results from the PathCentre were better for AIS plus HSIL than for AIS alone, but the results from WDP were similar for both groups. For the CIN 3 control cases, the "sensitivity" of a single smear was 42.5%. To the authors' knowledge epidemiologic studies published to date have not demonstrated a benefit from screening for precursors of cervical adenocarcinoma. However, in the study laboratories as in many others, reasonable expertise in diagnosing AIS has been acquired only within the last 10-15 years, which may be too short a period in which to demonstrate a significant effect. The results of the current study provide some encouraging baseline data regarding the sensitivity of the Papanicolaou smear in detecting AIS. Further improvements in sampling and cytodiagnosis may be possible. Copyright 2002 American Cancer Society.
Löpprich, Martin; Krauss, Felix; Ganzinger, Matthias; Senghas, Karsten; Riezler, Stefan; Knaup, Petra
2016-08-05
In the Multiple Myeloma clinical registry at Heidelberg University Hospital, most data are extracted from discharge letters. Our aim was to analyze if it is possible to make the manual documentation process more efficient by using methods of natural language processing for multiclass classification of free-text diagnostic reports to automatically document the diagnosis and state of disease of myeloma patients. The first objective was to create a corpus consisting of free-text diagnosis paragraphs of patients with multiple myeloma from German diagnostic reports, and its manual annotation of relevant data elements by documentation specialists. The second objective was to construct and evaluate a framework using different NLP methods to enable automatic multiclass classification of relevant data elements from free-text diagnostic reports. The main diagnoses paragraph was extracted from the clinical report of one third randomly selected patients of the multiple myeloma research database from Heidelberg University Hospital (in total 737 selected patients). An EDC system was set up and two data entry specialists independently performed manual documentation of at least nine specific data elements for multiple myeloma characterization. Both data entries were compared and assessed by a third specialist and an annotated text corpus was created. A framework was constructed, consisting of a self-developed package to split multiple diagnosis sequences into several subsequences, four different preprocessing steps to normalize the input data and two classifiers: a maximum entropy classifier (MEC) and a support vector machine (SVM). In total 15 different pipelines were examined and assessed by a ten-fold cross-validation, reiterated 100 times. As quality indicators, the average error rate and the average F1-score were computed. For significance testing the approximate randomization test was used.
The created annotated corpus consists of 737 different diagnoses paragraphs with a total number of 865 coded diagnoses. The dataset is publicly available in the supplementary online files for training and testing of further NLP methods. Both classifiers showed low average error rates (MEC: 1.05; SVM: 0.84) and high F1-scores (MEC: 0.89; SVM: 0.92). However, the results varied widely depending on the classified data element. Preprocessing methods increased this effect and had significant impact on the classification, both positive and negative. The automatic diagnosis splitter increased the average error rate significantly, even if the F1-score decreased only slightly. The low average error rates and high average F1-scores of each pipeline demonstrate the suitability of the investigated NLP methods. However, it was also shown that there is no best practice for an automatic classification of data elements from free-text diagnostic reports.
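The two quality indicators used above, error rate and F1-score, can be illustrated with a minimal pure-Python sketch. This is not the authors' pipeline; the labels and predictions below are invented:

```python
# Sketch of the quality indicators: error rate and per-class F1-score,
# computed from hypothetical predicted vs. annotated labels.

def error_rate(y_true, y_pred):
    """Fraction of items classified differently from the annotation."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_per_class(y_true, y_pred, cls):
    """Harmonic mean of precision and recall for one class."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented annotations for one data element (diagnosis category)
y_true = ["MM", "MM", "MGUS", "MM", "MGUS", "amyloidosis"]
y_pred = ["MM", "MGUS", "MGUS", "MM", "MGUS", "amyloidosis"]

err = error_rate(y_true, y_pred)            # one of six items is wrong
f1_mm = f1_per_class(y_true, y_pred, "MM")  # precision 1.0, recall 2/3
```

In the study these metrics were averaged over 100 repetitions of ten-fold cross-validation, which explains why the reported figures are averages rather than single-run values.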
Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes
NASA Astrophysics Data System (ADS)
Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.
2015-12-01
H. Annamalai (1), B. Taguchi (2), J. P. McCreary (1), J. Hafner (1), M. Nagura (2), and T. Miyama (2); (1) International Pacific Research Center, University of Hawaii, USA; (2) Application Laboratory, JAMSTEC, Japan. In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, have shown little improvement. One of the implications is that uncertainties in the future projections of time-mean changes to AAM rainfall may not have reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, the progress in monsoon modeling is rather slow. This leads us to wonder: Has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is on better understanding the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments are performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could potentially influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We will argue that a systematic and coordinated approach to identifying the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.
Leyland, M J; Beurskens, M N A; Flanagan, J C; Frassinetti, L; Gibson, K J; Kempenaars, M; Maslov, M; Scannell, R
2016-01-01
The Joint European Torus (JET) high resolution Thomson scattering (HRTS) system measures radial electron temperature and density profiles. One of the key capabilities of this diagnostic is measuring the steep pressure gradient, termed the pedestal, at the edge of JET plasmas. The pedestal is susceptible to limiting instabilities, such as Edge Localised Modes (ELMs), characterised by a periodic collapse of the steep gradient region. A common method to extract the pedestal width, gradient, and height, used on numerous machines, is by performing a modified hyperbolic tangent (mtanh) fit to overlaid profiles selected from the same region of the ELM cycle. This process of overlaying profiles, termed ELM synchronisation, maximises the number of data points defining the pedestal region for a given phase of the ELM cycle. When fitting to HRTS profiles, it is necessary to incorporate the diagnostic radial instrument function, particularly important when considering the pedestal width. A deconvolved fit is determined by a forward convolution method requiring knowledge of only the instrument function and profiles. The systematic error due to the deconvolution technique incorporated into the JET pedestal fitting tool has been documented by Frassinetti et al. [Rev. Sci. Instrum. 83, 013506 (2012)]. This paper seeks to understand and quantify the systematic error introduced to the pedestal width due to ELM synchronisation. Synthetic profiles, generated with error bars and point-to-point variation characteristic of real HRTS profiles, are used to evaluate the deviation from the underlying pedestal width. We find on JET that the ELM synchronisation systematic error is negligible in comparison to the statistical error when assuming ten overlaid profiles (typical for a pre-ELM fit to HRTS profiles). This confirms that fitting a mtanh to ELM synchronised profiles is a robust and practical technique for extracting the pedestal structure.
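The mtanh fit mentioned above has several published variants; the sketch below uses one widely cited parameterization (a modified tanh with a linear core slope), not necessarily the exact form in the JET pedestal fitting tool:

```python
# Sketch of a modified hyperbolic tangent (mtanh) pedestal profile.
# Parameterization is one common form from the literature, shown for
# illustration; parameter values below are invented.
import math

def mtanh(x, s):
    """Modified tanh with linear core slope s; s = 0 recovers tanh(x)."""
    return ((1 + s * x) * math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def pedestal(r, b, h, r0, w, s=0.0):
    """Pedestal profile: approaches b far outside and b + h far inside
    (for s = 0), centred at radius r0 with pedestal half-width w."""
    return b + 0.5 * h * (1 + mtanh((r0 - r) / (2 * w), s))

# Invented values: base b, height h, position r0 (m), width w (m)
b, h, r0, w = 0.1, 1.0, 3.8, 0.02
mid = pedestal(r0, b, h, r0, w)  # exactly b + h/2 at the pedestal position
```

Fitting this form to ELM-synchronised profiles (e.g. by nonlinear least squares) yields the pedestal width, gradient, and height discussed in the abstract.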
Gupta, Nalini; Banik, Tarak; Rajwanshi, Arvind; Radotra, Bishan D; Panda, Naresh; Dey, Pranab; Srinivasan, Radhika; Nijhawan, Raje
2012-01-01
This study was undertaken to evaluate the diagnostic utility and pitfalls of fine needle aspiration cytology (FNAC) in oral and oropharyngeal lesions. This was a retrospective audit of oral and oropharyngeal lesions diagnosed with FNAC over a period of six years (2005-2010). Oral/oropharyngeal lesions [n=157] comprised 0.35% of the total FNAC load. Ages ranged from 1 to 80 years, with a male:female ratio of 1.4:1. Aspirates were inadequate in 7% cases. Histopathology was available in 73/157 (46.5%) cases. Palate was the most common site of involvement [n=66] followed by tongue [n=35], buccal mucosa [n=18], floor of the mouth [n=17], tonsil [n=10], alveolus [n=5], retromolar trigone [n=3], and posterior pharyngeal wall [n=3]. Cytodiagnoses were categorized into infective/inflammatory lesions and benign cysts, and benign and malignant tumours. Uncommon lesions included ectopic lingual thyroid and adult rhabdomyoma of tongue, and solitary fibrous tumor (SFT), and leiomyosarcoma in buccal mucosa. A single false-positive case was dense inflammation with squamous cells misinterpreted as squamous cell carcinoma (SCC) on cytology. There were eight false-negative cases mainly due to sampling error. One false-negative case due to interpretation error was in a salivary gland tumor. The sensitivity of FNAC in diagnosing oral/oropharyngeal lesions was 71.4%; specificity was 97.8% with diagnostic accuracy of 87.7%. Salivary gland tumors and squamous cell carcinoma (SCC) are the most common lesions seen in the oral cavity. FNAC proves to be highly effective in diagnosing the spectrum of different lesions in this region. Sampling error is the main cause of false-negative cases in this region.
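The reported sensitivity, specificity, and accuracy follow directly from confusion-matrix counts. The counts below are a hypothetical reconstruction chosen to be approximately consistent with the reported figures (the paper's eight false negatives and one false positive are stated; the remaining counts are inferred, not quoted):

```python
# Hypothetical confusion-matrix counts approximately reproducing the
# reported FNAC figures. Only fn = 8 and fp = 1 are stated in the abstract;
# tp and tn are inferred for illustration.
tp, fn = 20, 8   # malignant on histology: detected vs. missed by FNAC
tn, fp = 45, 1   # benign on histology: one false-positive SCC call

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
```

With these counts, sensitivity is 20/28 ≈ 71.4% and specificity 45/46 ≈ 97.8%, matching the abstract; accuracy comes out near the reported 87.7%.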
An easy-to-use diagnostic system development shell
NASA Technical Reports Server (NTRS)
Tsai, L. C.; Ross, J. B.; Han, C. Y.; Wee, W. G.
1987-01-01
The Diagnostic System Development Shell (DSDS), an expert system development shell for diagnostic systems, is described. The major objective of building the DSDS is to create a very easy to use and friendly environment for knowledge engineers and end-users. The DSDS is written in OPS5 and CommonLisp. It runs on a VAX/VMS system. A set of domain independent, generalized rules is built in the DSDS, so the users need not be concerned about building the rules. The facts are explicitly represented in a unified format. A powerful check facility which helps the user to check the errors in the created knowledge bases is provided. A judgement facility and other useful facilities are also available. A diagnostic system based on the DSDS system is question driven and can call or be called by other knowledge based systems written in OPS5 and CommonLisp. A prototype diagnostic system for diagnosing a Philips constant potential X-ray system has been built using the DSDS.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawke, J.; Scannell, R.; Maslov, M.
2013-10-15
This work isolated the cause of the observed discrepancy between the electron temperature (T{sub e}) measurements before and after the JET Core LIDAR Thomson Scattering (TS) diagnostic was upgraded. In the upgrade process, stray light filters positioned just before the detectors were removed from the system. Modelling showed that the shift imposed on the stray light filters' transmission functions, due to the variations in the incidence angles of the collected photons, impacted plasma measurements. To correct for this identified source of error, correction factors were developed using ray tracing models for the calibration and operational states of the diagnostic. The application of these correction factors increased the observed T{sub e}, partially if not completely removing the observed discrepancy in the measured T{sub e} between the JET core LIDAR TS diagnostic, High Resolution Thomson Scattering, and the Electron Cyclotron Emission diagnostics.
Draft versus finished sequence data for DNA and protein diagnostic signature development
Gardner, Shea N.; Lam, Marisa W.; Smith, Jason R.; Torres, Clinton L.; Slezak, Tom R.
2005-01-01
Sequencing pathogen genomes is costly, demanding careful allocation of limited sequencing resources. We built a computational Sequencing Analysis Pipeline (SAP) to guide decisions regarding the amount of genomic sequencing necessary to develop high-quality diagnostic DNA and protein signatures. SAP uses simulations to estimate the number of target genomes and close phylogenetic relatives (near neighbors or NNs) to sequence. We use SAP to assess whether draft data are sufficient or finished sequencing is required using Marburg and variola virus sequences. Simulations indicate that intermediate to high-quality draft with error rates of 10−3–10−5 (∼8× coverage) of target organisms is suitable for DNA signature prediction. Low-quality draft with error rates of ∼1% (3× to 6× coverage) of target isolates is inadequate for DNA signature prediction, although low-quality draft of NNs is sufficient, as long as the target genomes are of high quality. For protein signature prediction, sequencing errors in target genomes substantially reduce the detection of amino acid sequence conservation, even if the draft is of high quality. In summary, high-quality draft of target and low-quality draft of NNs appears to be a cost-effective investment for DNA signature prediction, but may lead to underestimation of predicted protein signatures. PMID:16243783
Rennie, Waverly; Phetsouvanh, Rattanaxay; Lupisan, Socorro; Vanisaveth, Viengsay; Hongvanthong, Bouasy; Phompida, Samlane; Alday, Portia; Fulache, Mila; Lumagui, Richard; Jorgensen, Pernille; Bell, David; Harvey, Steven
2007-01-01
The usefulness of rapid diagnostic tests (RDT) in malaria case management depends on the accuracy of the diagnoses they provide. Despite their apparent simplicity, previous studies indicate that RDT accuracy is highly user-dependent. As malaria RDTs will frequently be used in remote areas with little supervision or support, minimising mistakes is crucial. This paper describes the development of new instructions (job aids) to improve health worker performance, based on observations of common errors made by remote health workers and villagers in preparing and interpreting RDTs, in the Philippines and Laos. Initial preparation using the instructions provided by the manufacturer was poor, but improved significantly with the job aids (e.g. correct use both of the dipstick and cassette increased in the Philippines by 17%). However, mistakes in preparation remained commonplace, especially for dipstick RDTs, as did mistakes in interpretation of results. A short orientation on correct use and interpretation further improved accuracy, from 70% to 80%. The results indicate that apparently simple diagnostic tests can be poorly performed and interpreted, but provision of clear, simple instructions can reduce these errors. Preparation of appropriate instructions and training as well as monitoring of user behaviour are an essential part of rapid test implementation.
Avoiding Misdiagnosis in Patients with Neurological Emergencies
Pope, Jennifer V.; Edlow, Jonathan A.
2012-01-01
Approximately 5% of patients presenting to emergency departments have neurological symptoms. The most common symptoms or diagnoses include headache, dizziness, back pain, weakness, and seizure disorder. Little is known about the actual misdiagnosis of these patients, which can have disastrous consequences for both the patients and the physicians. This paper reviews the existing literature about the misdiagnosis of neurological emergencies and analyzes the reason behind the misdiagnosis by specific presenting complaint. Our goal is to help emergency physicians and other providers reduce diagnostic error, understand how these errors are made, and improve patient care. PMID:22888439
Diagnostic aids: the Surgical Sieve revisited.
Chai, Jason; Evans, Lloyd; Hughes, Tom
2017-08-01
Diagnostic errors are well documented in the literature and emphasise the need to teach diagnostic skills at an early stage in medical school to create effective and safe clinicians. Hence, there may be a place for diagnostic aids (such as the Surgical Sieve) that provide a framework for generating ideas about diagnoses. With repeated use of the Surgical Sieve in teaching sessions with students, and prompted by the traditional handheld wheels used in antenatal clinics, we developed the Compass Medicine, a handheld diagnostic wheel comprising three concentric discs attached at the centre. We report a preliminary study comparing the Surgical Sieve and the Compass Medicine in generating differential diagnoses. A total of 48 third-year medical students from Cardiff University participated in a study aimed at measuring the efficacy of diagnostic aids (Surgical Sieve and Compass Medicine) in generating diagnoses. We quantified the effect each aid had on the number of diagnoses generated, and compared the size of the effect between the two diagnostic aids. The study suggests that both diagnostic aids prompted users to generate a greater number of diagnoses, but there was no significant difference in the size of effect between the two diagnostic aids. We hope that our study with diagnostic aids will encourage the use of robust tools to teach medical students an easily visualised framework for diagnostic thinking. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Patient Safety in the Context of Neonatal Intensive Care: Research and Educational Opportunities
Raju, Tonse N. K.; Suresh, Gautham; Higgins, Rosemary D.
2012-01-01
Case reports and observational studies continue to report adverse events from medical errors. However, despite considerable attention to patient safety in the popular media, this topic is not a regular component of medical education, and much research needs to be carried out to understand the causes, consequences, and prevention of healthcare-related adverse events during neonatal intensive care. To address the knowledge gaps and to formulate a research and educational agenda in neonatology, the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) invited a panel of experts to a workshop in August 2010. Patient safety issues discussed were: the reasons for errors, including systems design, working conditions, and worker fatigue; a need to develop a “culture” of patient safety; the role of electronic medical records, information technology, and simulators in reducing errors; error disclosure practices; medico-legal concerns; and educational needs. Specific neonatology-related topics discussed were: errors during resuscitation, mechanical ventilation, and performance of invasive procedures; medication errors including those associated with milk feedings; diagnostic errors; and misidentification of patients. This article provides an executive summary of the workshop. PMID:21386749
Kuchenbecker, Joern
2018-05-22
Pseudoisochromatic colour plates are constructed according to specific principles. They can be very different in quality. To check the diagnostic quality, they have to be tested on a large number of subjects, but this procedure can be tedious and expensive. Therefore, the use of a standardised web-based test is recommended. Eight Pflüger trident colour plates (including 1 demo plate) according to the Velhagen edition of 1980 were digitised and inserted into a web-based colour vision test (www.color-vision-test.info). After visual display calibration and 2 demonstrations of the demo plate (#1) to introduce the test procedure, 7 red-green colour plates (#3, 4, 10, 11, 12, 13, 16) were presented in a randomised order in 3 different randomised positions each for 10 seconds. The user had to specify the opening of the Pflüger trident by a mouse click or arrow keys. 6360 evaluations of all plates from 2120 randomised subjects were included. For subjects with zero errors, the detection rates of the plates were between 72.2% (plate #3) and 90.7% (plate #16; n = 6360). For subjects making 7 errors per test, the detection rates of the plates were between 21.6% (plate #3) and 67.7% (plate #16; n = 1556). For subjects making 14 errors, the detection rates of the plates were between 10.9% (plate #11) and 40.1% (plate #16; n = 606). Plate #16 showed the highest detection rate at the zero-error level as well as at the 7- and 14-error limits, indicating that the diagnostic quality of this plate was low. Its colourimetric data were therefore improved, after which its detection rate was significantly lower. The differences in quality of pseudoisochromatic Pflüger trident colour plates can be tested without great effort using a web-based test. Optimisation of a poor-quality colour plate can then be carried out. Georg Thieme Verlag KG Stuttgart · New York.
Decision making in trauma settings: simulation to improve diagnostic skills.
Murray, David J; Freeman, Brad D; Boulet, John R; Woodhouse, Julie; Fehr, James J; Klingensmith, Mary E
2015-06-01
In the setting of acute injury, a wrong, missed, or delayed diagnosis can impact survival. Clinicians rely on pattern recognition and heuristics to rapidly assess injuries, but an overreliance on these approaches can result in a diagnostic error. Simulation has been advocated as a method for practitioners to learn how to recognize the limitations of heuristics and develop better diagnostic skills. The objective of this study was to determine whether simulation could be used to provide teams the experiences in managing scenarios that require the use of heuristic as well as analytic diagnostic skills to effectively recognize and treat potentially life-threatening injuries. Ten scenarios were developed to assess the ability of trauma teams to provide initial care to a severely injured patient. Seven standard scenarios simulated severe injuries that once diagnosed could be effectively treated using standard Advanced Trauma Life Support algorithms. Because diagnostic error occurs more commonly in complex clinical settings, 3 complex scenarios required teams to use more advanced diagnostic skills to uncover a coexisting condition and treat the patient. Teams composed of 3 to 5 practitioners were evaluated in the performance of 7 (of 10) randomly selected scenarios (5 standard, 2 complex). Expert raters scored teams using standardized checklists and global scores. Eighty-three surgery, emergency medicine, and anesthesia residents constituted 21 teams. Expert raters were able to reliably score the scenarios. Teams accomplished fewer checklist actions and received lower global scores on the 3 analytic scenarios (73.8% [12.3%] and 5.9 [1.6], respectively) compared with the 7 heuristic scenarios (83.2% [11.7%] and 6.6 [1.3], respectively; P < 0.05 for both). Teams led by more junior residents received higher global scores on the analytic scenarios (6.4 [1.3]) than the more senior team leaders (5.3 [1.7]).
This preliminary study indicates that teams led by more senior residents received higher scores when managing heuristic scenarios but were less effective when managing the scenarios that require a more analytic approach. Simulation can be used to provide teams with decision-making experiences in trauma settings and could be used to improve diagnostic skills as well as study the decision-making process.
Installé, Arnaud Jf; Van den Bosch, Thierry; De Moor, Bart; Timmerman, Dirk
2014-10-20
Using machine-learning techniques, clinical diagnostic model research extracts diagnostic models from patient data. Traditionally, patient data are often collected using electronic Case Report Form (eCRF) systems, while mathematical software is used for analyzing these data using machine-learning techniques. Due to the lack of integration between eCRF systems and mathematical software, extracting diagnostic models is a complex, error-prone process. Moreover, due to the complexity of this process, it is usually only performed once, after a predetermined number of data points have been collected, without insight into the predictive performance of the resulting models. The objective of the Clinical Data Miner (CDM) software framework study was to offer an eCRF system with integrated data preprocessing and machine-learning libraries, improving efficiency of the clinical diagnostic model research workflow, and to enable optimization of patient inclusion numbers through study performance monitoring. The CDM software framework was developed using a test-driven development (TDD) approach, to ensure high software quality. Architecturally, CDM's design is split over a number of modules, to ensure future extendability. The TDD approach has enabled us to deliver high software quality. CDM's eCRF Web interface is in active use by the studies of the International Endometrial Tumor Analysis consortium, with over 4000 enrolled patients, and more studies planned. Additionally, a derived user interface has been used in six separate interrater agreement studies. CDM's integrated data preprocessing and machine-learning libraries simplify some otherwise manual and error-prone steps in the clinical diagnostic model research workflow. Furthermore, CDM's libraries provide study coordinators with a method to monitor a study's predictive performance as patient inclusions increase. To our knowledge, CDM is the only eCRF system integrating data preprocessing and machine-learning libraries.
This integration improves the efficiency of the clinical diagnostic model research workflow. Moreover, by simplifying the generation of learning curves, CDM enables study coordinators to assess more accurately when data collection can be terminated, resulting in better models or lower patient recruitment costs.
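The learning-curve idea described above (monitoring predictive performance as patient inclusions increase) can be sketched generically. The sketch below is not CDM's API; it uses a toy one-feature classifier and made-up data, with all names hypothetical, to show how held-out accuracy is tracked as the training set grows:

```python
import random

def learning_curve(train, test, fit, sizes):
    """Held-out accuracy as a function of training-set size
    (a generic stand-in for CDM's study-performance monitoring)."""
    curve = []
    for n in sizes:
        model = fit(train[:n])
        accuracy = sum(model(x) == y for x, y in test) / len(test)
        curve.append((n, accuracy))
    return curve

def fit_threshold(samples):
    """Toy one-feature 'model': cut at the midpoint of the class means."""
    means = {}
    for label in (0, 1):
        xs = [x for x, y in samples if y == label]
        means[label] = sum(xs) / len(xs)
    cut = (means[0] + means[1]) / 2
    return lambda x: int(x > cut)

random.seed(0)
draw = lambda label: [(random.gauss(2.0 * label, 1.0), label) for _ in range(200)]
# interleave the two classes so every training prefix contains both labels
data = [pair for duo in zip(draw(0), draw(1)) for pair in duo]
train, test = data[:300], data[300:]
curve = learning_curve(train, test, fit_threshold, [10, 50, 100, 300])
```

A coordinator watching such a curve flatten out can judge when further patient recruitment stops improving the model, which is the termination decision the abstract refers to.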
High energy Coulomb-scattered electrons for relativistic particle beams and diagnostics
Thieberger, P.; Altinbas, Z.; Carlson, C.; ...
2016-03-29
A new system used for monitoring energetic Coulomb-scattered electrons as the main diagnostic for accurately aligning the electron and ion beams in the new Relativistic Heavy Ion Collider (RHIC) electron lenses is described in detail. The theory of electron scattering from relativistic ions is developed and applied to the design and implementation of the system used to achieve and maintain the alignment. Commissioning with gold and 3He beams is then described, as well as the successful utilization of the new system during the 2015 RHIC polarized proton run. Systematic errors of the new method are then estimated. Lastly, some possible future applications of Coulomb-scattered electrons for beam diagnostics are briefly discussed.
Zurovac, Dejan; Larson, Bruce A.; Skarbinski, Jacek; Slutsker, Laurence; Snow, Robert W.; Hamel, Mary J.
2008-01-01
Using data on clinical practices for outpatients 5 years and older, test accuracy, and malaria prevalence, we model financial and clinical implications of malaria rapid diagnostic tests (RDTs) under the new artemether-lumefantrine (AL) treatment policy in one high and one low malaria prevalence district in Kenya. In the high transmission district, RDTs as actually used would improve malaria treatment (61% less over-treatment but 8% more under-treatment) and lower costs (21% less). Nonetheless, the majority of patients with malaria would not be correctly treated with AL. In the low transmission district, especially because the treatment policy was new and AL was not widely used, RDTs as actually used would yield a minor reduction in under-treatment errors (36% less but the base is small) with 41% higher costs. In both districts, adherence to revised clinical practices with RDTs has the potential to further decrease treatment errors with acceptable costs. PMID:18541764
Panayides, Andreas; Antoniou, Zinonas C; Mylonas, Yiannos; Pattichis, Marios S; Pitsillides, Andreas; Pattichis, Constantinos S
2013-05-01
In this study, we describe an effective video communication framework for the wireless transmission of H.264/AVC medical ultrasound video over mobile WiMAX networks. Medical ultrasound video is encoded using diagnostically-driven, error resilient encoding, where quantization levels are varied as a function of the diagnostic significance of each image region. We demonstrate how our proposed system allows for the transmission of high-resolution clinical video that is encoded at the clinical acquisition resolution and can then be decoded with low delay. To validate performance, we perform OPNET simulations of mobile WiMAX Medium Access Control (MAC) and Physical (PHY) layer characteristics that include service prioritization classes, different modulation and coding schemes, fading channel conditions, and mobility. We encode the medical ultrasound videos at the 4CIF (704 × 576) resolution that can accommodate clinical acquisition that is typically performed at lower resolutions. Video quality assessment is based on both clinical (subjective) and objective evaluations.
Metabolic emergencies and the emergency physician.
Fletcher, Janice Mary
2016-02-01
Fifty percent of inborn errors of metabolism (IEM) present in later childhood and adulthood, with crises commonly precipitated by minor viral illnesses or increased protein ingestion. Many physicians only consider IEM after more common conditions (such as sepsis) have been considered. In view of the large number of inborn errors, it might appear that their diagnosis requires precise knowledge of a large number of biochemical pathways and their interrelationships. In fact, an adequate diagnostic approach can be based on the proper use of only a few screening tests. A detailed history of antecedent events, together with these simple screening tests, can be diagnostic, leading to life-saving, targeted treatments for many disorders. Unrecognised, IEM can lead to significant mortality and morbidity. Advice is available 24/7 through the metabolic service based at the major paediatric hospital in each state and Starship Children's Health in New Zealand. © 2016 The Author. Journal of Paediatrics and Child Health © 2016 Paediatrics and Child Health Division (Royal Australasian College of Physicians).
Gear noise, vibration, and diagnostic studies at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Zakrajsek, James J.; Oswald, Fred B.; Townsend, Dennis P.; Coy, John J.
1990-01-01
The NASA Lewis Research Center and the U.S. Army Aviation Systems Command are involved in a joint research program to advance the technology of rotorcraft transmissions. This program consists of analytical as well as experimental efforts to achieve the overall goals of reducing weight, noise, and vibration, while increasing life and reliability. Recent analytical activities are highlighted in the areas of gear noise, vibration, and diagnostics performed in-house and through NASA and U.S. Army sponsored grants and contracts. These activities include studies of gear tooth profiles to reduce transmission error and vibration as well as gear housing and rotordynamic modeling to reduce structural vibration transmission and noise radiation, and basic research into current gear failure diagnostic methodologies. Results of these activities are presented along with an overview of near term research plans in the gear noise, vibration, and diagnostics area.
Zenina, L P; Godkov, M A
2013-08-01
The article presents the experience of implementing a quality management system in the practice of the multi-field laboratory of an emergency medical care hospital. An analysis of laboratory errors is presented, and modes of their prevention are demonstrated. The ratings of the department of laboratory diagnostics of the N. V. Sklifosofskiy research institute of emergency care in the EQAS (USA) Monthly Clinical Chemistry program since 2007 are presented. The implementation of the quality management system for laboratory analysis in the department of laboratory diagnostics made it possible to provide physicians of clinical departments with reliable information. Clinicians' confidence in the reported results increased. The effectiveness of laboratory diagnostics increased due to lower analysis costs without a negative impact on the quality of the curative process.
Diagnostics of Robust Growth Curve Modeling Using Student's "t" Distribution
ERIC Educational Resources Information Center
Tong, Xin; Zhang, Zhiyong
2012-01-01
Growth curve models with different types of distributions of random effects and of intraindividual measurement errors for robust analysis are compared. After demonstrating the influence of distribution specification on parameter estimation, 3 methods for diagnosing the distributions for both random effects and intraindividual measurement errors…
49 CFR 395.16 - Electronic on-board recording devices.
Code of Federal Regulations, 2010 CFR
2010-10-01
... transfer through wired and wireless methods to portable computers used by roadside safety assurance... the results of power-on self-tests and diagnostic error codes. (e) Date and time. (1) The date and... part. Wireless communication information interchange methods must comply with the requirements of the...
Maximum likelihood phase-retrieval algorithm: applications.
Nahrstedt, D A; Southwell, W H
1984-12-01
The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.
21 CFR 886.1770 - Manual refractor.
Code of Federal Regulations, 2010 CFR
2010-04-01
... DEVICES OPHTHALMIC DEVICES Diagnostic Devices § 886.1770 Manual refractor. (a) Identification. A manual refractor is a device that is a set of lenses of various dioptric powers intended to measure the refractive error of the eye. (b) Classification. Class I (general controls). The device is exempt from the...
Improved assessment of gross and net primary productivity of Canada's landmass
NASA Astrophysics Data System (ADS)
Gonsamo, Alemu; Chen, Jing M.; Price, David T.; Kurz, Werner A.; Liu, Jane; Boisvenue, Céline; Hember, Robbie A.; Wu, Chaoyang; Chang, Kuo-Hsien
2013-12-01
We assess Canada's gross primary productivity (GPP) and net primary productivity (NPP) using the boreal ecosystem productivity simulator (BEPS) at 250 m spatial resolution with improved input parameter and driver fields and phenology and nutrient release parameterization schemes. BEPS is a process-based two-leaf enzyme kinetic terrestrial ecosystem model designed to simulate energy, water, and carbon (C) fluxes using spatial data sets of meteorology, remotely sensed land surface variables, soil properties, and photosynthesis and respiration rate parameters. Two improved key land surface variables, leaf area index (LAI) and land cover type, are derived at 250 m from the Moderate Resolution Imaging Spectroradiometer sensor. For diagnostic error assessment, we use nine forest flux tower sites where all measured C flux, meteorology, and ancillary data sets are available. The errors due to input drivers and parameters are then independently corrected for Canada-wide GPP and NPP simulations. The optimized LAI use, for example, reduced the absolute bias in GPP from 20.7% to 1.1% for hourly BEPS simulations. Following the error diagnostics and corrections, daily GPP and NPP are simulated over Canada at 250 m spatial resolution, the highest-resolution simulation yet for the country or any other comparable region. Total NPP (GPP) for Canada's land area was 1.27 (2.68) Pg C for 2008, with forests contributing 1.02 (2.2) Pg C. The annual comparisons between measured and simulated GPP show that the mean differences are not statistically significant (p > 0.05, paired t test). The main BEPS simulation error sources are from the driver fields.
Is forceps more useful than visualization for measurement of colon polyp size?
Kim, Jae Hyun; Park, Seun Ja; Lee, Jong Hoon; Kim, Tae Oh; Kim, Hyun Jin; Kim, Hyung Wook; Lee, Sang Heon; Baek, Dong Hoon; (BIGS), Busan Ulsan Gyeongnam Intestinal Study Group Society
2016-01-01
AIM: To identify whether the forceps estimation is more useful than visual estimation in the measurement of colon polyp size. METHODS: We recorded colonoscopy video clips that included scenes visualizing the polyp and scenes using open biopsy forceps in association with the polyp, which were used for an exam. A total of 40 endoscopists from the Busan Ulsan Gyeongnam Intestinal Study Group Society (BIGS) participated in this study. Participants watched 40 pairs of video clips of the scenes for visual estimation and forceps estimation, and wrote down the estimated polyp size on the exam paper. When analyzing the results of the exam, we assessed inter-observer differences, diagnostic accuracy, and error range in the measurement of the polyp size. RESULTS: The overall intra-class correlation coefficients (ICC) of inter-observer agreement for forceps estimation and visual estimation were 0.804 (95%CI: 0.731-0.873, P < 0.001) and 0.743 (95%CI: 0.656-0.828, P < 0.001), respectively. The ICCs of each group for forceps estimation were higher than those for visual estimation (Beginner group, 0.761 vs 0.693; Expert group, 0.887 vs 0.840, respectively). The overall diagnostic accuracy for visual estimation was 0.639 and for forceps estimation was 0.754 (P < 0.001). In the beginner group and the expert group, the diagnostic accuracy for the forceps estimation was significantly higher than that of the visual estimation (Beginner group, 0.734 vs 0.613, P < 0.001; Expert group, 0.784 vs 0.680, P < 0.001, respectively). The overall error range for visual estimation and forceps estimation were 1.48 ± 1.18 and 1.20 ± 1.10, respectively (P < 0.001). The error ranges of each group for forceps estimation were significantly smaller than those for visual estimation (Beginner group, 1.38 ± 1.08 vs 1.68 ± 1.30, P < 0.001; Expert group, 1.12 ± 1.11 vs 1.42 ± 1.11, P < 0.001, respectively). 
CONCLUSION: Application of the open biopsy forceps method when measuring colon polyp size could help reduce inter-observer differences and error rates. PMID:27003999
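The accuracy and error-range statistics reported above can be computed from paired true and estimated polyp sizes. A minimal sketch follows, assuming "accuracy" means an estimate falling within a fixed tolerance of the true size (the abstract does not state its exact criterion) and using made-up sizes in mm:

```python
import statistics

def size_metrics(true_sizes, estimates, tolerance=1.0):
    """Accuracy (share of estimates within `tolerance` mm of truth, an
    assumed definition) plus mean and SD of the absolute error, the
    'error range' form (mean ± SD) used in the abstract."""
    abs_err = [abs(e - t) for e, t in zip(estimates, true_sizes)]
    accuracy = sum(err <= tolerance for err in abs_err) / len(abs_err)
    return accuracy, statistics.mean(abs_err), statistics.stdev(abs_err)

# hypothetical polyp sizes (mm): true vs estimated
acc, mean_err, sd_err = size_metrics([5, 7, 10, 4], [5, 8, 12, 4])
# acc = 0.75, mean_err = 0.75
```

Under this tolerance-based definition, the forceps method's smaller mean ± SD absolute error translates directly into the higher diagnostic accuracy the study reports.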
Armstrong, Bonnie; Spaniol, Julia; Persaud, Nav
2018-02-13
Clinicians often overestimate the probability of a disease given a positive test result (positive predictive value; PPV) and the probability of no disease given a negative test result (negative predictive value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an 'experience format') would promote more accurate PPV and NPV estimates compared with a numerical format. Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design. The study was completed online, via Qualtrics (Provo, Utah, USA). 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael's Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency. Estimation accuracy was quantified by the mean absolute error (MAE; absolute difference between estimate and true predictive value). PPV estimation errors were larger in the numerical format (MAE=32.6%, 95% CI 26.8% to 38.4%) compared with the experience format (MAE=15.9%, 95% CI 11.8% to 20.0%, d =0.697, P<0.001). Likewise, NPV estimation errors were larger in the numerical format (MAE=24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE=11.0%, 95% CI 6.5% to 15.5%, d =0.303, P=0.015). Exposure to simulated patient cases promotes accurate estimation of predictive values in clinicians. This finding carries implications for diagnostic training and practice.
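The predictive values participants were asked to estimate follow from Bayes' rule applied to a test's sensitivity and specificity and the disease prevalence. A generic sketch (not the study's materials; the example numbers are illustrative):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Bayes' rule: convert test characteristics and prevalence
    into PPV and NPV via the four outcome probabilities."""
    tp = sensitivity * prevalence              # true positives
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# a test with 90% sensitivity and specificity at 5% prevalence:
# the PPV is only about 0.32, which is exactly the kind of value
# clinicians tend to overestimate
ppv, npv = predictive_values(0.9, 0.9, 0.05)
```

The counterintuitive drop in PPV at low prevalence is what makes both the numerical format hard and the experience format (seeing many simulated negatives per true positive) effective.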
Accuracy of vaginal symptom self-diagnosis algorithms for deployed military women.
Ryan-Wenger, Nancy A; Neal, Jeremy L; Jones, Ashley S; Lowe, Nancy K
2010-01-01
Deployed military women have an increased risk for development of vaginitis due to extreme temperatures, primitive sanitation, hygiene and laundry facilities, and unavailable or unacceptable healthcare resources. The Women in the Military Self-Diagnosis (WMSD) and treatment kit was developed as a field-expedient solution to this problem. The primary study aims were to evaluate the accuracy of women's self-diagnosis of vaginal symptoms and eight diagnostic algorithms and to predict potential self-medication omission and commission error rates. Participants included 546 active duty, deployable Army (43.3%) and Navy (53.6%) women with vaginal symptoms who sought healthcare at troop medical clinics on base. In the clinic lavatory, women conducted a self-diagnosis using a sterile cotton swab to obtain vaginal fluid, a FemExam card to measure positive or negative pH and amines, and the investigator-developed WMSD Decision-Making Guide. Potential self-diagnoses were "bacterial infection" (bacterial vaginosis [BV] and/or trichomonas vaginitis [TV]), "yeast infection" (candida vaginitis [CV]), "no infection/normal," or "unclear." The Affirm VPIII laboratory reference standard was used to detect clinically significant amounts of vaginal fluid DNA for organisms associated with BV, TV, and CV. Women's self-diagnostic accuracy was 56% for BV/TV and 69.2% for CV. False-positives would have led to a self-medication commission error rate of 20.3% for BV/TV and 8% for CV. Potential self-medication omission error rates due to false-negatives were 23.7% for BV/TV and 24.8% for CV. The positive predictive value of diagnostic algorithms ranged from 0% to 78.1% for BV/TV and 41.7% for CV. The algorithms were based on clinical diagnostic standards. The nonspecific nature of vaginal symptoms, mixed infections, and a faulty device intended to measure vaginal pH and amines explain why none of the algorithms reached the goal of 95% accuracy.
The next prototype of the WMSD kit will not include nonspecific vaginal signs and symptoms in favor of recently available point-of-care devices that identify antigens or enzymes of the causative BV, TV, and CV organisms.
Summertime concentrations of fine particulate carbon in the southeastern United States are consistently underestimated by air quality models. In an effort to understand the cause of this error, the Community Multiscale Air Quality (CMAQ) model is instrumented to track primary org...
[Description of clinical thinking by the dual-process theory].
Peña G, Luis
2012-06-01
Clinical thinking is a very complex process that can be described by the dual-process theory: it has an intuitive part (that recognizes patterns) and an analytical part (that tests hypotheses). It is vulnerable to cognitive biases that professionals must be aware of in order to minimize diagnostic errors.
Teaching Statistics with Minitab II.
ERIC Educational Resources Information Center
Ryan, T. A., Jr.; And Others
Minitab is a statistical computing system which uses simple language, produces clear output, and keeps track of bookkeeping automatically. Error checking with English diagnostics and inclusion of several default options help to facilitate use of the system by students. Minitab II is an improved and expanded version of the original Minitab which…
Feasibility of Self-Reflection as a Tool to Balance Clinical Reasoning Strategies
ERIC Educational Resources Information Center
Sibbald, Matthew; de Bruin, Anique B. H.
2012-01-01
Clinicians are believed to use two predominant reasoning strategies: system 1 based pattern recognition, and system 2 based analytical reasoning. Balancing these cognitive reasoning strategies is widely believed to reduce diagnostic error. However, clinicians approach different problems with different reasoning strategies. This study explores…
Interlanguage Variation: A Point Missed?
ERIC Educational Resources Information Center
Tice, Bradley Scott
A study investigated patterns in phonological errors occurring in the speaker's second language in both formal and informal speaking situations. Subjects were three adult learners of English as a second language, including a native Spanish-speaker and two Asians. Their speech was recorded during diagnostic testing (formal speech) and in everyday…
Investigation of an Error Theory for Conjoint Measurement Methodology.
1983-05-01
1ybren, 1982; Srinivasan and Shocker, 1973a, 1973b; Ullrich and Cummins, 1973; Takane, Young, and de Leeuw, 1980; Young, 1972. ... procedures as a diagnostic tool. Specifically, they used the computed STRESS value and a measure of fit they called PRECAP that could be obtained
The School-Based Multidisciplinary Team and Nondiscriminatory Assessment.
ERIC Educational Resources Information Center
Pfeiffer, Steven I.
The potential of multidisciplinary teams to control for possible errors in diagnosis, classification, and placement and to provide a vehicle for ensuring effective outcomes of diagnostic practices is illustrated. The present functions of the school-based multidisciplinary team (also called, for example, assessment team, child study team, placement…
Strategies for Teaching Fractions: Using Error Analysis for Intervention and Assessment
ERIC Educational Resources Information Center
Spangler, David B.
2011-01-01
Many students struggle with fractions and must understand them before learning higher-level math. Veteran educator David B. Spangler provides research-based tools that are aligned with NCTM and Common Core State Standards. He outlines powerful diagnostic methods for analyzing student work and providing timely, specific, and meaningful…
78 FR 9060 - Request for Nominations for Voting Members on Public Advisory Panels or Committees
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-07
... diagnostic assays, e.g., hepatologists; molecular biologists. Molecular and Clinical 2 June 1, 2013. Genetics.... Individuals with training in inborn errors of metabolism, biochemical and/or molecular genetics, population genetics, epidemiology and related statistical training, and clinical molecular genetics testing (e.g...
Cross-Proportions: A Conceptual Method for Developing Quantitative Problem-Solving Skills
ERIC Educational Resources Information Center
Cook, Elzbieta; Cook, Stephen L.
2005-01-01
The cross-proportion method allows both the instructor and the student to easily determine where an error is made during problem solving. The C-P method supports a strong cognitive foundation upon which students can develop other diagnostic methods as they advance in chemistry and scientific careers.
Decision-Making Accuracy of CBM Progress-Monitoring Data
ERIC Educational Resources Information Center
Hintze, John M.; Wells, Craig S.; Marcotte, Amanda M.; Solomon, Benjamin G.
2018-01-01
This study examined the diagnostic accuracy associated with decision making as is typically conducted with curriculum-based measurement (CBM) approaches to progress monitoring. Using previously published estimates of the standard errors of estimate associated with CBM, 20,000 progress-monitoring data sets were simulated to model student reading…
A Report on IRI Scoring and Interpretation.
ERIC Educational Resources Information Center
Anderson, Betty
Noting that most classroom teachers use informal reading inventories (IRI) as diagnostic instruments, a study examined what oral reading accuracy level is most appropriate for the instructional level and whether repetitions should be counted as oral reading errors. Randomly selected students from the second through fifth grades at two elementary…
Sleep Patterns and Its Relationship to Schooling and Family.
ERIC Educational Resources Information Center
Jones, Franklin Ross
Diagnostic classifications of sleep and arousal disorders have been categorized in four major areas: disorders of initiating and maintaining sleep, disorders of excessive sleepiness, disorders of the sleep/wake pattern, and the parasomnias such as sleep walking, talking, and night terrors. Another nomenclature classifies them into DIMS (disorders…
Asymmetries in Predictive and Diagnostic Reasoning
ERIC Educational Resources Information Center
Fernbach, Philip M.; Darlow, Adam; Sloman, Steven A.
2011-01-01
In this article, we address the apparent discrepancy between causal Bayes net theories of cognition, which posit that judgments of uncertainty are generated from causal beliefs in a way that respects the norms of probability, and evidence that probability judgments based on causal beliefs are systematically in error. One purported source of bias…
Periodic Application of Concurrent Error Detection in Processor Array Architectures. PhD. Thesis -
NASA Technical Reports Server (NTRS)
Chen, Paul Peichuan
1993-01-01
Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.
Enhancements to the timing of the OMEGA laser system to improve illumination uniformity
NASA Astrophysics Data System (ADS)
Donaldson, W. R.; Katz, J.; Kosc, T. Z.; Kelly, J. H.; Hill, E. M.; Bahr, R. E.
2016-09-01
Two diagnostics have been developed to improve the uniformity on the OMEGA Laser System, which is used for inertial confinement fusion (ICF) research. The first diagnostic measures the phase of an optical modulator (used for the spectral dispersion technique employed on OMEGA to enhance spatial smoothing), which adds bandwidth to the optical pulse. Setting this phase precisely is required to reduce pointing errors. The second diagnostic ensures that the arrival times of all the beams are synchronized. The arrival of each of the 60 OMEGA beams is measured by placing a 1-mm diffusing sphere at target chamber center. By comparing the arrival time of each beam with respect to a reference pulse, the measured timing spread of the OMEGA Laser System is now 3.8 ps.
A case of poor substructure diagnostics
NASA Technical Reports Server (NTRS)
Butler, Thomas G.
1992-01-01
The NASTRAN Manuals in the substructuring area are all geared toward instant success, but the solution paths are fraught with many traps for human error. Thus, the probability of suffering a fatal abort is high. In such circumstances, the necessity for diagnostics that are user friendly is paramount. This paper is written in the spirit of improving the diagnostics as well as the documentation in one area where the author felt he was backed into a blind corner as a result of his having committed a data oversight. This topic is aired by referring to an analysis of a particular structure. The structure under discussion used a number of local coordinate systems that simplified the preparation of input data. The principal features of this problem are introduced by reference to a series of figures.
NASA Astrophysics Data System (ADS)
Gautam, Ghaneshwar; Surmick, David M.; Parigger, Christian G.
2015-07-01
In this letter, we present a brief comment regarding the recently published paper by Ivković et al., J Quant Spectrosc Radiat Transf 2015;154:1-8. Reference is made to previous experimental results to indicate that self-absorption must have occurred; however, when error propagation is carefully considered, both the widths and the peak separation predict electron densities within the error margins. Nevertheless, the diagnostic method and the presented details on the use of the hydrogen beta peak separation are viewed as a welcome contribution to studies of laser-induced plasma.
2D electron density profile measurement in tokamak by laser-accelerated ion-beam probe.
Chen, Y H; Yang, X Y; Lin, C; Wang, L; Xu, M; Wang, X G; Xiao, C J
2014-11-01
A new concept for a Heavy Ion Beam Probe (HIBP) diagnostic has been proposed, the key of which is to replace the electrostatic accelerator of a traditional HIBP with a laser-driven ion accelerator. Due to the large energy spread of the ions, the laser-accelerated HIBP can measure the two-dimensional (2D) electron density profile of a tokamak plasma. In a preliminary simulation, a 2D density profile was reconstructed with a spatial resolution of about 2 cm and with an error below 15% in the core region. Diagnostics of 2D density fluctuations is also discussed.
The role of MRI in musculoskeletal practice: a clinical perspective
Dean Deyle, Gail
2011-01-01
This clinical perspective presents an overview of current and potential uses for magnetic resonance imaging (MRI) in musculoskeletal practice. Clinical practice guidelines and current evidence for improved outcomes will help providers determine the situations when an MRI is indicated. The advanced competency standard of examination used by physical therapists will be helpful to prevent overuse of musculoskeletal imaging, reduce diagnostic errors, and provide the appropriate clinical context to pathology revealed on MRI. Physical therapists are diagnostically accurate and appropriately conservative in their use of MRI consistent with evidence-based principles of diagnosis and screening. PMID:22851878
Three-dimensional assessment of facial asymmetry: A systematic review.
Akhil, Gopi; Senthil Kumar, Kullampalayam Palanisamy; Raja, Subramani; Janardhanan, Kumaresan
2015-08-01
For patients with facial asymmetry, complete and precise diagnosis, and surgical treatments to correct the underlying cause of the asymmetry are significant. Conventional diagnostic radiographs (submento-vertex projections, posteroanterior radiography) have limitations in asymmetry diagnosis due to two-dimensional assessments of three-dimensional (3D) images. The advent of 3D images has greatly reduced the magnification and projection errors that are common in conventional radiographs making it as a precise diagnostic aid for assessment of facial asymmetry. Thus, this article attempts to review the newly introduced 3D tools in the diagnosis of more complex facial asymmetries.
Macrae, Toby; Tyler, Ann A
2014-10-01
The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different tests of articulation/phonology, percent consonants correct, and the number of omission, substitution, distortion, typical, and atypical error patterns used in the production of different wordlists that had similar levels of phonetic and structural complexity. In comparison with children with SSD only, children with SSD and LI used similar numbers but different types of errors, including more omission patterns (p < .001, d = 1.55) and fewer distortion patterns (p = .022, d = 1.03). There were no significant differences in substitution, typical, and atypical error pattern use. Frequent omission error pattern use may reflect a more compromised linguistic system characterized by absent phonological representations for target sounds (see Shriberg et al., 2005). Research is required to examine the diagnostic potential of early frequent omission error pattern use in predicting later diagnoses of co-occurring SSD and LI and/or reading problems.
Spatial calibration of a tokamak neutral beam diagnostic using in situ neutral beam emission
Chrystal, Colin; Burrell, Keith H.; Grierson, Brian A.; ...
2015-10-20
Neutral beam injection is used in tokamaks to heat, apply torque, drive non-inductive current, and diagnose plasmas. Neutral beam diagnostics need accurate spatial calibrations to benefit from the measurement localization provided by the neutral beam. A new technique has been developed that uses in situ measurements of neutral beam emission to determine the spatial location of the beam and the associated diagnostic views. This technique was developed to improve the charge exchange recombination diagnostic (CER) at the DIII-D tokamak and uses measurements of the Doppler shift and Stark splitting of neutral beam emission made by that diagnostic. These measurements contain information about the geometric relation between the diagnostic views and the neutral beams when they are injecting power. This information is combined with standard spatial calibration measurements to create an integrated spatial calibration that provides a more complete description of the neutral beam-CER system. The integrated spatial calibration results are very similar to the standard calibration results and derived quantities from CER measurements are unchanged within their measurement errors. Lastly, the methods developed to perform the integrated spatial calibration could be useful for tokamaks with limited physical access.
Spatial calibration of a tokamak neutral beam diagnostic using in situ neutral beam emission
NASA Astrophysics Data System (ADS)
Chrystal, C.; Burrell, K. H.; Grierson, B. A.; Pace, D. C.
2015-10-01
Neutral beam injection is used in tokamaks to heat, apply torque, drive non-inductive current, and diagnose plasmas. Neutral beam diagnostics need accurate spatial calibrations to benefit from the measurement localization provided by the neutral beam. A new technique has been developed that uses in situ measurements of neutral beam emission to determine the spatial location of the beam and the associated diagnostic views. This technique was developed to improve the charge exchange recombination (CER) diagnostic at the DIII-D tokamak and uses measurements of the Doppler shift and Stark splitting of neutral beam emission made by that diagnostic. These measurements contain information about the geometric relation between the diagnostic views and the neutral beams when they are injecting power. This information is combined with standard spatial calibration measurements to create an integrated spatial calibration that provides a more complete description of the neutral beam-CER system. The integrated spatial calibration results are very similar to the standard calibration results and derived quantities from CER measurements are unchanged within their measurement errors. The methods developed to perform the integrated spatial calibration could be useful for tokamaks with limited physical access.
ReQON: a Bioconductor package for recalibrating quality scores from next-generation sequencing data
2012-01-01
Background Next-generation sequencing technologies have become important tools for genome-wide studies. However, the quality scores that are assigned to each base have been shown to be inaccurate. If the quality scores are used in downstream analyses, these inaccuracies can have a significant impact on the results. Results Here we present ReQON, a tool that recalibrates the base quality scores from an input BAM file of aligned sequencing data using logistic regression. ReQON also generates diagnostic plots showing the effectiveness of the recalibration. We show that ReQON produces quality scores that are both more accurate, in the sense that they more closely correspond to the probability of a sequencing error, and do a better job of discriminating between sequencing errors and non-errors than the original quality scores. We also compare ReQON to other available recalibration tools and show that ReQON is less biased and performs favorably in terms of quality score accuracy. Conclusion ReQON is an open source software package, written in R and available through Bioconductor, for recalibrating base quality scores for next-generation sequencing data. ReQON produces a new BAM file with more accurate quality scores, which can improve the results of downstream analysis, and produces several diagnostic plots showing the effectiveness of the recalibration. PMID:22946927
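The recalibration idea described in this abstract can be sketched as follows. This is a minimal stand-in for ReQON's approach, not its actual code: the single-feature logistic model (error probability as a function of the reported quality, on a normalized scale) and the gradient-descent fit are illustrative assumptions, and ReQON's real feature set and regression machinery differ.

```python
import math

def phred_to_prob(q):
    # Phred convention: quality Q encodes error probability p = 10^(-Q/10).
    return 10 ** (-q / 10)

def prob_to_phred(p):
    # Inverse mapping, clamped to avoid log(0).
    return -10 * math.log10(max(p, 1e-10))

def fit_logistic(qualities, is_error, lr=0.5, epochs=3000):
    # Fit P(error) = sigmoid(a + b * Q/40) by gradient descent on the
    # logistic log-loss; qualities are scaled by 40 to keep steps stable.
    a = b = 0.0
    n = len(qualities)
    for _ in range(epochs):
        ga = gb = 0.0
        for q, y in zip(qualities, is_error):
            x = q / 40.0
            p = 1 / (1 + math.exp(-(a + b * x)))
            ga += (p - y) / n
            gb += (p - y) * x / n
        a -= lr * ga
        b -= lr * gb
    return a, b

def recalibrate(q, a, b):
    # Map a reported quality to a recalibrated Phred-scaled score.
    p = 1 / (1 + math.exp(-(a + b * q / 40.0)))
    return prob_to_phred(p)
```

On synthetic data where low-quality bases really do err more often, the fitted slope is negative and recalibrated scores preserve the ordering, which is the sense in which recalibrated qualities "more closely correspond to the probability of a sequencing error".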
Sandblom, Gabriel; Granroth, Sofie; Rasmussen, Ib Christian
2008-01-01
Although numerous tumour markers are available for periampullary tumours, including pancreatic cancer, their specificity and sensitivity have been questioned. To assess the diagnostic and prognostic values of tissue polypeptide specific antigen (TPS), carbohydrate antigen 19-9 (CA 19-9), vascular endothelial growth factor (VEGF-A), and carcinoembryonic antigen (CEA), we took serum samples from 56 patients with mass lesions in the pancreatic head. Among these patients, further investigations revealed pancreatic cancer in 20 patients, other malignant diseases in 12, and benign conditions in 24. Median CEA in all patients was 3.4 microg/L (range 0.5-585.0), median CA 19-9 was 105 kU/L (range 0.6-1 300 00), median TPS 123.5 U/L (range 15.0-3350), and median VEGF-A 132.5 ng/L (range 60.0-4317). In ROC plots based on the ability to distinguish between benign and malignant conditions, the area under the curve was 0.747 (standard error [SE] = 0.075) for CEA, 0.716 (SE = 0.078) for CA 19-9, and 0.822 (SE = 0.086) for TPS. None of the markers significantly predicted survival in the subgroup of patients with pancreatic cancer. Our study shows that the markers may be used as fairly reliable diagnostic tools, but cannot be used to predict survival.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, D P; Ritts, W D; Wharton, S
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional to global scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
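The single-site vs. cross-site optimization comparison can be illustrated with a deliberately tiny model. This is not CFLUX: the one-parameter light-use-efficiency-style model (flux ≈ gain × FPAR), the closed-form least-squares fit, and the two hypothetical "sites" are all assumptions made for illustration only.

```python
def fit_gain(fpar, flux):
    # Closed-form least squares for a one-parameter model flux = g * fpar:
    # g minimizing sum (flux - g*fpar)^2 is sum(f*y) / sum(f*f).
    num = sum(f * y for f, y in zip(fpar, flux))
    den = sum(f * f for f in fpar)
    return num / den

def rmse(fpar, flux, g):
    # Root-mean-square prediction error of gain g on one site's data.
    n = len(fpar)
    return (sum((y - g * f) ** 2 for f, y in zip(fpar, flux)) / n) ** 0.5

# Two hypothetical sites sharing the same underlying gain (10) but with
# opposite site-level biases in the observed fluxes.
site_a = ([0.2, 0.4, 0.6], [2.4, 4.4, 6.4])
site_b = ([0.2, 0.4, 0.6], [1.6, 3.6, 5.6])

g_single = fit_gain(*site_a)                      # single-site fit (site A)
g_cross = fit_gain(site_a[0] + site_b[0],         # cross-site (pooled) fit
                   site_a[1] + site_b[1])
```

Evaluating both fits on site B shows the pooled parameters transfer better, mirroring the abstract's finding that cross-site optimization lowers cross-site prediction error.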
Uncertainty analysis technique for OMEGA Dante measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M. J.; Widmann, K.; Sorce, C.
2010-10-15
The Dante is an 18-channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters, and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
Uncertainty Analysis Technique for OMEGA Dante Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
May, M J; Widmann, K; Sorce, C
2010-05-07
The Dante is an 18-channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters, and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one-sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
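The Monte Carlo parameter variation described above can be sketched as follows. The `unfold_flux` function here is a toy stand-in (a responsivity-weighted sum of channel voltages), not the actual Dante unfold algorithm, and the channel count and error magnitudes are assumptions for illustration.

```python
import random
import statistics

def unfold_flux(voltages, responsivity):
    # Toy stand-in for the Dante unfold algorithm: total flux taken as the
    # sum of channel voltages divided by channel responsivities.
    return sum(v / r for v, r in zip(voltages, responsivity))

def monte_carlo_flux(voltages, responsivity, rel_sigma, n_trials=1000, seed=1):
    # Perturb each channel voltage with a one-sigma Gaussian relative error
    # (standing in for the combined unfold + calibration uncertainty),
    # re-run the unfold, and report the mean flux and its error bar.
    rng = random.Random(seed)
    fluxes = []
    for _ in range(n_trials):
        perturbed = [v * rng.gauss(1.0, rel_sigma) for v in voltages]
        fluxes.append(unfold_flux(perturbed, responsivity))
    return statistics.mean(fluxes), statistics.stdev(fluxes)
```

For 18 independent channels with 5% relative error, the spread of the 1000 unfolded fluxes gives the error bar directly, with no need for analytic error propagation through the unfold.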
The pursuit of better diagnostic performance: a human factors perspective.
Henriksen, Kerm; Brady, Jeff
2013-10-01
Despite the relatively slow start in treating diagnostic error as an amenable research topic at the beginning of the patient safety movement, interest has steadily increased over the past few years in the form of solicitations for research, regularly scheduled conferences, an expanding literature and even a new professional society. Yet improving diagnostic performance increasingly is recognised as a multifaceted challenge. With the aid of a human factors perspective, this paper addresses a few of these challenges, including questions that focus on who owns the problem, treating cognitive and system shortcomings as separate issues, why knowledge in the head is not enough, and what we are learning from health information technology (IT) and the use of checklists. To encourage empirical testing of interventions that aim to improve diagnostic performance, a systems engineering approach making use of rapid-cycle prototyping and simulation is proposed. To gain a fuller understanding of the complexity of the sociotechnical space where diagnostic work is performed, a final note calls for the formation of substantive partnerships with those in disciplines beyond the clinical domain.
An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.
Obuchowski, Nancy A
2006-02-15
ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
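The estimator proposed in this abstract has an AUC-like pairwise interpretation, which a minimal sketch can make concrete: for a random pair of patients, how often does the diagnostic test order the pair the same way as the continuous-scale gold standard? This is written in the spirit of the proposed measure; the paper's exact estimator, confidence interval, and comparison test are not reproduced here.

```python
def concordance_accuracy(test, gold):
    # Pairwise concordance between a diagnostic test and a continuous-scale
    # gold standard: fraction of patient pairs ordered the same way by both
    # (ties counted as 1/2). Analogous in interpretation to the area under
    # the ROC curve, but with no dichotomization of the gold standard.
    agree = pairs = 0.0
    n = len(test)
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            prod = (test[i] - test[j]) * (gold[i] - gold[j])
            if prod > 0:
                agree += 1
            elif prod == 0:
                agree += 0.5
    return agree / pairs
```

A perfectly concordant test scores 1.0, a perfectly reversed one 0.0, and a single mis-ordered pair out of six costs 1/6, just as a misranked diseased/non-diseased pair would in an ordinary AUC.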
Mercan, Ezgi; Aksoy, Selim; Shapiro, Linda G; Weaver, Donald L; Brunyé, Tad T; Elmore, Joann G
2016-08-01
Whole slide digital imaging technology enables researchers to study pathologists' interpretive behavior as they view digital slides and gain new understanding of the diagnostic medical decision-making process. In this study, we propose a simple yet important analysis to extract diagnostically relevant regions of interest (ROIs) from tracking records using only pathologists' actions as they viewed biopsy specimens in the whole slide digital imaging format (zooming, panning, and fixating). We use these extracted regions in a visual bag-of-words model based on color and texture features to predict diagnostically relevant ROIs on whole slide images. Using a logistic regression classifier in a cross-validation setting on 240 digital breast biopsy slides and viewport tracking logs of three expert pathologists, we produce probability maps that show 74% overlap with the actual regions at which pathologists looked. We compare different bag-of-words models by changing dictionary size, visual word definition (patches vs. superpixels), and training data (automatically extracted ROIs vs. manually marked ROIs). This study is a first step in understanding the scanning behaviors of pathologists and the underlying reasons for diagnostic errors.
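The histogram step of a visual bag-of-words model like the one described can be sketched in a few lines. The dictionary and patch features below are toy values; the paper's dictionary learning and its actual color/texture features are not reproduced, and the nearest-word assignment shown is just the standard hard-assignment scheme.

```python
def nearest_word(feature, dictionary):
    # Index of the closest visual word by squared Euclidean distance.
    best, best_d = 0, float("inf")
    for k, word in enumerate(dictionary):
        d = sum((f - c) ** 2 for f, c in zip(feature, word))
        if d < best_d:
            best, best_d = k, d
    return best

def bag_of_words(patch_features, dictionary):
    # Normalized histogram of visual-word counts for one region of interest;
    # this histogram is the feature vector fed to a classifier such as the
    # logistic regression used in the study.
    hist = [0.0] * len(dictionary)
    for feat in patch_features:
        hist[nearest_word(feat, dictionary)] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]
```

Varying the dictionary size changes the histogram length, and swapping patches for superpixels changes what `patch_features` contains, which is exactly the comparison axis the abstract describes.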
Woolf, Steven H.; Kuzel, Anton J.; Dovey, Susan M.; Phillips, Robert L.
2004-01-01
BACKGROUND Notions about the most common errors in medicine currently rest on conjecture and weak epidemiologic evidence. We sought to determine whether cascade analysis is of value in clarifying the epidemiology and causes of errors and whether physician reports are sensitive to the impact of errors on patients. METHODS Eighteen US family physicians participating in a 6-country international study filed 75 anonymous error reports. The narratives were examined to identify the chain of events and the predominant proximal errors. We tabulated the consequences to patients, both reported by physicians and inferred by investigators. RESULTS A chain of errors was documented in 77% of incidents. Although 83% of the errors that ultimately occurred were mistakes in treatment or diagnosis, 2 of 3 were set in motion by errors in communication. Fully 80% of the errors that initiated cascades involved informational or personal miscommunication. Examples of informational miscommunication included communication breakdowns among colleagues and with patients (44%), misinformation in the medical record (21%), mishandling of patients’ requests and messages (18%), inaccessible medical records (12%), and inadequate reminder systems (5%). When asked whether the patient was harmed, physicians answered affirmatively in 43% of cases in which their narratives described harms. Psychological and emotional effects accounted for 17% of physician-reported consequences but 69% of investigator-inferred consequences. CONCLUSIONS Cascade analysis of physicians’ error reports is helpful in understanding the precipitant chain of events, but physicians provide incomplete information about how patients are affected. Miscommunication appears to play an important role in propagating diagnostic and treatment mistakes. PMID:15335130
TU-AB-202-03: Prediction of PET Transfer Uncertainty by DIR Error Estimating Software, AUTODIRECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, H; Chen, J; Phillips, J
2016-06-15
Purpose: Deformable image registration (DIR) is a powerful tool, but DIR errors can adversely affect its clinical applications. To estimate voxel-specific DIR uncertainty, a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), has been developed and validated. This work tests the ability of this software to predict uncertainty for the transfer of standard uptake values (SUV) from positron-emission tomography (PET) with DIR. Methods: Virtual phantoms are used for this study. Each phantom has a planning computed tomography (CT) image and a diagnostic PET-CT image set. A deformation was digitally applied to the diagnostic CT to create the planning CT image and establish a known deformation between the images. One lung and three rectum patient datasets were employed to create the virtual phantoms. Both of these sites have difficult deformation scenarios associated with them, which can affect DIR accuracy (lung tissue sliding and changes in rectal filling). The virtual phantoms were created to simulate these scenarios by introducing discontinuities in the deformation field at the lung or rectum border. The DIR algorithm from the Plastimatch software was applied to these phantoms. The SUV mapping errors from the DIR were then compared to those predicted by AUTODIRECT. Results: The SUV error distributions closely followed the AUTODIRECT-predicted error distribution for the 4 test cases. The minimum and maximum PET SUVs were produced from AUTODIRECT at a 95% confidence interval before applying gradient-based SUV segmentation for each of these volumes. Notably, 93.5% of the target volume warped by the true deformation was included within the AUTODIRECT-predicted maximum SUV volume after the segmentation, while 78.9% of the target volume was within the target volume warped by Plastimatch.
Conclusion: The AUTODIRECT framework is able to predict PET transfer uncertainty caused by DIR, which enables an understanding of the associated target volume uncertainty.
Missed diagnostic opportunities within South Africa's early infant diagnosis program, 2010-2015.
Haeri Mazanderani, Ahmad; Moyo, Faith; Sherman, Gayle G
2017-01-01
Samples submitted for HIV PCR testing that fail to yield a positive or negative result represent missed diagnostic opportunities. We describe HIV PCR test rejections and indeterminate results, and the associated delay in diagnosis, within South Africa's early infant diagnosis (EID) program from 2010 to 2015. HIV PCR test data from January 2010 to December 2015 were extracted from the National Health Laboratory Service Corporate Data Warehouse, a central data repository of all registered test-sets within the public health sector in South Africa, by laboratory number, result, date, facility, and testing laboratory. Samples that failed to yield either a positive or negative result were categorized according to the rejection code on the laboratory information system, and descriptive analysis performed using Microsoft Excel. Delay in diagnosis was calculated for patients who had a missed diagnostic opportunity registered between January 2013 and December 2015 by means of a patient linking-algorithm employing demographic details. Between 2010 and 2015, 2 178 582 samples were registered for HIV PCR testing of which 6.2% (n = 134 339) failed to yield either a positive or negative result, decreasing proportionally from 7.0% (n = 20 556) in 2010 to 4.4% (n = 21 388) in 2015 (p<0.001). Amongst 76 972 coded missed diagnostic opportunities, 49 585 (64.4%) were a result of pre-analytical error and 27 387 (35.6%) analytical error. Amongst 49 694 patients searched for follow-up results, 16 895 (34.0%) had at least one subsequent HIV PCR test registered after a median of 29 days (IQR: 13-57), of which 8.4% tested positive compared with 3.6% of all samples submitted for the same period. Routine laboratory data provides the opportunity for near real-time surveillance and quality improvement within the EID program. 
Delay in diagnosis and wastage of resources associated with missed diagnostic opportunities must be addressed and infants actively followed-up as South Africa works towards elimination of mother-to-child transmission.
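Surveillance summaries like those quoted in this record (proportion of samples failing to yield a result, median delay with interquartile range) are simple to compute once the laboratory records are linked. The sketch below uses only the standard library; the delay values in the usage check are made-up illustrations, while the rejection counts come from the abstract itself.

```python
import statistics

def failure_rate_percent(n_failed, n_total):
    # Proportion of submitted samples that failed to yield a positive or
    # negative result, expressed as a percentage.
    return 100.0 * n_failed / n_total

def delay_summary(delays_in_days):
    # Median and interquartile range of the delay (in days) between a
    # missed diagnostic opportunity and the next registered test.
    q1, median, q3 = statistics.quantiles(sorted(delays_in_days), n=4)
    return median, (q1, q3)
```

Applied to the abstract's totals, `failure_rate_percent(134339, 2178582)` reproduces the reported 6.2% overall failure proportion (to rounding). Note that `statistics.quantiles` defaults to the "exclusive" method, so quartiles on small samples may differ slightly from other packages' conventions.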
Parents' versus physicians' values for clinical outcomes in young febrile children.
Kramer, M S; Etezadi-Amoli, J; Ciampi, A; Tange, S M; Drummond, K N; Mills, E L; Bernstein, M L; Leduc, D G
1994-05-01
To compare how parents and physicians value potential clinical outcomes in young children who have a fever but no focus of bacterial infection. Cross-sectional study of 100 parents of well children aged 3 to 24 months, 61 parents of febrile children aged 3 to 24 months, and 56 attending staff physicians working in a children's hospital emergency department. A pretested visual analog scale was used to assess values on a 0-to-1 scale (where 0 is the value of the worst possible outcome, and 1 is the value for the best) for 22 scenarios, grouped in three categories according to severity. Based on the three or four common attributes comprising the scenarios in a given group, each respondent's value function was estimated statistically based on multiattribute utility theory. For outcomes in group 1 (rapidly resolving viral infection with one or more diagnostic tests), no significant group differences were observed. For outcomes in groups 2 (acute infections without long-term sequelae) and 3 (long-term sequelae of urinary tract infection or bacterial meningitis), parents of well children and parents of febrile children had values that were similar to each other but significantly lower than physicians' values for pneumonia with delayed diagnosis, false-positive diagnosis of urinary tract infection, viral meningitis, and unilateral hearing loss. For bacterial meningitis with or without delay, however, the reverse pattern was observed; physicians' values were lower than parents'. In arriving at their judgment for group 2 and 3 scenarios, parents gave significantly greater weight to attributes involving the pain and discomfort of diagnostic tests and to diagnostic error, whereas physicians gave significantly greater weight to attributes involving both short- and long-term morbidity and long-term worry and inconvenience. 
Parents were significantly more likely than physicians to be risk-seeking in the way they weighted the attributes comprising group 2 and 3 scenarios, i.e., they were more willing to risk rare but severe morbidity to avoid the short-term adverse effects of testing. Parents and physicians show fundamental value differences concerning diagnostic testing, diagnostic error, and short- and long-term morbidity; these differences have important implications for diagnostic decision making in the young febrile child.
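The multiattribute value functions estimated in this study can be illustrated with the simplest additive form. Everything numeric below is hypothetical: the three attributes, the scenario scores, and the parent/physician weights are invented to show how different weightings of the same outcome produce different values, not taken from the paper (which estimated value functions statistically from visual analog scale responses).

```python
def additive_value(attribute_values, weights):
    # Additive multiattribute value function on a 0-to-1 scale:
    # v(x) = sum_k w_k * v_k(x_k), with the weights summing to 1.
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * v for w, v in zip(weights, attribute_values))

# One hypothetical scenario scored on three attributes (0 = worst,
# 1 = best): comfort of the diagnostic workup, short-term morbidity
# avoided, and long-term morbidity avoided.
scenario = [0.2, 0.9, 0.9]
parent_weights = [0.5, 0.25, 0.25]      # heavier weight on test discomfort
physician_weights = [0.1, 0.45, 0.45]   # heavier weight on morbidity
```

With these illustrative weights, parents value the scenario lower than physicians do because the low comfort score dominates their weighting, echoing the abstract's finding that parents weighted the pain and discomfort of testing more heavily.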
Automatic lung segmentation using control feedback system: morphology and texture paradigm.
Noor, Norliza M; Than, Joel C M; Rijal, Omar M; Kassim, Rosminah M; Yunus, Ashari; Zeki, Amir A; Anzidei, Michele; Saba, Luca; Suri, Jasjit S
2015-03-01
Interstitial Lung Disease (ILD) encompasses a wide array of diseases that share some common radiologic characteristics. When diagnosing such diseases, radiologists can be affected by heavy workload and fatigue, thus decreasing diagnostic accuracy. Automatic segmentation is the first step in implementing a Computer Aided Diagnosis (CAD) system that will help radiologists improve diagnostic accuracy, thereby reducing manual interpretation. The proposed automatic segmentation uses an initial thresholding and morphology based segmentation coupled with feedback that detects large deviations with a corrective segmentation. This feedback is analogous to a control system, which allows detection of abnormal or severe lung disease and provides feedback to an online segmentation, improving the overall performance of the system. This feedback system encompasses a texture paradigm. The study included 48 male and 48 female patients, comprising 15 normal and 81 abnormal cases. A senior radiologist chose the five levels needed for ILD diagnosis. The results of segmentation were displayed by showing the comparison of the automated and ground truth boundaries (courtesy of ImgTracer™ 1.0, AtheroPoint™ LLC, Roseville, CA, USA). Segmentation performance for the left lung was 96.52% for Jaccard Index, 98.21% for Dice Similarity, 0.61 mm for Polyline Distance Metric (PDM), -1.15% for Relative Area Error, and 4.09% for Area Overlap Error. Segmentation performance for the right lung was 97.24% for Jaccard Index, 98.58% for Dice Similarity, 0.61 mm for PDM, -0.03% for Relative Area Error, and 3.53% for Area Overlap Error. The segmentation has an overall similarity of 98.4%. The proposed segmentation is an accurate and fully automated system.
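The area-based metrics quoted in this abstract can be computed directly from a pair of binary masks. The definitions below are the commonly used ones; the paper's exact definition of Area Overlap Error may differ (it is taken here as 100 × (1 − Jaccard), which is one standard convention), and the Polyline Distance Metric is omitted because it requires boundary geometry.

```python
def seg_metrics(auto_mask, truth_mask):
    # auto_mask / truth_mask: same-length flat lists of 0/1 pixels for the
    # automated segmentation and the ground-truth segmentation.
    tp = sum(1 for a, t in zip(auto_mask, truth_mask) if a and t)
    a_area = sum(auto_mask)
    t_area = sum(truth_mask)
    union = a_area + t_area - tp
    jaccard = tp / union                       # intersection over union
    dice = 2 * tp / (a_area + t_area)          # Dice similarity
    rel_area_err = 100.0 * (a_area - t_area) / t_area   # signed area error, %
    overlap_err = 100.0 * (1 - jaccard)        # one common convention
    return jaccard, dice, rel_area_err, overlap_err
```

Note that Relative Area Error is signed (an undersegmented lung gives a negative value, as in the reported -1.15%), whereas the overlap-based metrics are symmetric in over- vs. undersegmentation.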
Savant, Deepika; Bajaj, Jaya; Gimenez, Cecilia; Rafael, Oana C; Mirzamani, Neda; Chau, Karen; Klein, Melissa; Das, Kasturi
2017-01-01
Urine cytology is the most frequently utilized test to detect urothelial cancer. Secondary bladder neoplasms need to be recognized, as this impacts patient management. We report our experience with nonurothelial malignancies (NUM) detected in urine cytology over a 10-year period. A 10-year retrospective search for patients with biopsy-proven NUM of the urothelial tract yielded 25 urine samples from 14 patients. Two cytopathologists blinded to the original cytology diagnosis reviewed the cytology and histology slides. The incidence, cytomorphologic features, diagnostic accuracy, factors influencing the diagnostic accuracy, and clinical impact of the cytology result were studied. The incidence of NUM was <1%. The male:female ratio was 1.3. An abnormality was detected in 60% of the cases; however, in only 4% of the cases was a primary site identified accurately. Of the false negatives, 96% were deemed sampling errors and 4% interpretational. Patient management was not impacted in any of the false-negative cases due to concurrent or past tissue diagnosis. Colon cancer was the most frequent secondary tumor, and sampling error accounted for the false-negative results. Necrosis and a dirty background were often associated with metastatic lesions from the colon. Obtaining a history of a primary tumor elsewhere was a key factor in the diagnosis of a metastatic lesion. Hematopoietic malignancies remain a diagnostic challenge. Cytospin preparations were superior to monolayer (ThinPrep) technology for evaluating nuclear detail and background material. Diagnostic accuracy was improved by obtaining immunohistochemistry. Diagn. Cytopathol. 2017;45:22-28. © 2016 Wiley Periodicals, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koeylue, U.O.
1997-05-01
An in situ particulate diagnostic/analysis technique is outlined based on the Rayleigh-Debye-Gans polydisperse fractal aggregate (RDG/PFA) scattering interpretation of absolute angular light scattering and extinction measurements. Using the proper particle refractive index, the proposed data analysis method can quantitatively yield all aggregate parameters (particle volume fraction f_v, fractal dimension D_f, primary particle diameter d_p, particle number density n_p, and aggregate size distribution pdf(N)) without any prior knowledge about the particle-laden environment. The present optical diagnostic/interpretation technique was applied to two different soot-containing laminar and turbulent ethylene/air nonpremixed flames in order to assess its reliability. The aggregate interpretation of optical measurements yielded D_f, d_p, and pdf(N) that are in excellent agreement with ex situ thermophoretic sampling/transmission electron microscope (TS/TEM) observations within experimental uncertainties. However, volume-equivalent single particle models (Rayleigh/Mie) overestimated d_p by about a factor of 3, causing an order of magnitude underestimation in n_p. Consequently, soot surface areas and growth rates were in error by a factor of 3, emphasizing that aggregation effects need to be taken into account when using optical diagnostics for a reliable understanding of the soot formation/evolution mechanism in flames. The results also indicated that total soot emissivities were generally underestimated using Rayleigh analysis (up to 50%), mainly due to the uncertainties in soot refractive indices at infrared wavelengths. This suggests that aggregate considerations may not be essential for reasonable radiation heat transfer predictions from luminous flames because of fortuitous error cancellation, resulting in typically a 10 to 30% net effect.
Faciszewski, T; Broste, S K; Fardon, D
1997-10-01
The purpose of the present study was to evaluate the accuracy of data regarding diagnoses of spinal disorders in administrative databases at eight different institutions. The records of 189 patients who had been managed for a disorder of the lumbar spine were independently reviewed by a physician who assigned the appropriate diagnostic codes according to the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). The age range of the 189 patients was seventeen to eighty-four years. The six major diagnostic categories studied were herniation of a lumbar disc, a previous operation on the lumbar spine, spinal stenosis, cauda equina syndrome, acquired spondylolisthesis, and congenital spondylolisthesis. The diagnostic codes assigned by the physician were compared with the codes that had been assigned during the ordinary course of events by personnel in the medical records department of each of the eight hospitals. The accuracy of coding was also compared among the eight hospitals, and it was found to vary depending on the diagnosis. Although there were both false-negative and false-positive codes at each institution, most errors were related to the low sensitivity of coding for previous spinal operations: only seventeen (28 per cent) of sixty-one such diagnoses were coded correctly. Other errors in coding were less frequent, but their implications for conclusions drawn from the information in administrative databases depend on the frequency of a diagnosis and its importance in an analysis. This study demonstrated that the accuracy of a diagnosis of a spinal disorder recorded in an administrative database varies according to the specific condition being evaluated. It is necessary to document the relative accuracy of specific ICD-9-CM diagnostic codes in order to improve the ability to validate the conclusions derived from investigations based on administrative databases.
Failure mode analysis in adrenal vein sampling: a single-center experience.
Trerotola, Scott O; Asmar, Melissa; Yan, Yan; Fraker, Douglas L; Cohen, Debbie L
2014-10-01
To analyze failure modes in a high-volume adrenal vein sampling (AVS) practice in an effort to identify preventable causes of nondiagnostic sampling. A retrospective database was constructed containing 343 AVS procedures performed over a 10-year period. Each nondiagnostic AVS procedure was reviewed for failure mode and correlated with results of any repeat AVS. Data collected included selectivity index, lateralization index, adrenalectomy outcomes if performed, and details of AVS procedure. All AVS procedures were performed after cosyntropin stimulation, using sequential technique. AVS was nondiagnostic in 12 of 343 (3.5%) primary procedures and 2 secondary procedures. Failure was right-sided in 8 (57%) procedures, left-sided in 4 (29%) procedures, bilateral in 1 procedure, and neither in 1 procedure (laboratory error). Failure modes included diluted sample from correctly identified vein (n = 7 [50%]; 3 right and 4 left), vessel misidentified as adrenal vein (n = 3 [21%]; all right), failure to locate an adrenal vein (n = 2 [14%]; both right), cosyntropin stimulation failure (n = 1 [7%]; diagnostic by nonstimulated criteria), and laboratory error (n = 1 [7%]; specimen loss). A second AVS procedure was diagnostic in three of five cases (60%), and a third AVS procedure was diagnostic in one of one case (100%). Among the eight patients in whom AVS ultimately was not diagnostic, four underwent adrenalectomy based on diluted AVS samples, and one underwent adrenalectomy based on imaging; all five experienced improvement in aldosteronism. A substantial percentage of AVS failures occur on the left, all related to dilution. Even when technically nondiagnostic per strict criteria, some "failed" AVS procedures may be sufficient to guide therapy. Repeat AVS has a good yield. Copyright © 2014 SIR. Published by Elsevier Inc. All rights reserved.
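The abstract reports selectivity and lateralization indices without defining them. The sketch below uses the standard cortisol-corrected definitions common in AVS practice; the formulas and cutoffs are not taken from this paper, and the numbers are illustrative:

```python
# Standard AVS indices (common definitions; center-specific cutoffs vary
# and are not given in the abstract above).

def selectivity_index(adrenal_cortisol, ivc_cortisol):
    """SI: adrenal vein cortisol / peripheral (IVC) cortisol; confirms cannulation."""
    return adrenal_cortisol / ivc_cortisol

def lateralization_index(a_dom, c_dom, a_nondom, c_nondom):
    """LI: cortisol-corrected aldosterone ratio, dominant over nondominant side."""
    return (a_dom / c_dom) / (a_nondom / c_nondom)

# Illustrative values only:
print(selectivity_index(450.0, 18.0))                    # -> 25.0
print(lateralization_index(900.0, 450.0, 40.0, 400.0))   # -> 20.0
```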
Multimodal correlation and intraoperative matching of virtual models in neurosurgery
NASA Technical Reports Server (NTRS)
Ceresole, Enrico; Dalsasso, Michele; Rossi, Aldo
1994-01-01
The multimodal correlation between different diagnostic exams, the intraoperative calibration of pointing tools, and the correlation of the patient's virtual models with the patient himself are examples, taken from the biomedical field, of a single underlying problem: determining the relationship that links representations of the same object in different reference frames. Several methods have been developed to determine this relationship; among them, surface matching gives the patient minimal discomfort while keeping errors compatible with the required precision. The surface matching method has been successfully applied to the multimodal correlation of diagnostic exams such as CT, MR, PET, and SPECT. Algorithms for automatic segmentation of diagnostic images have been developed to extract the reference surfaces from the diagnostic exams, whereas the surface of the patient's skull has been monitored, in our approach, by means of a laser sensor mounted on the end effector of an industrial robot. An integrated system for virtual planning and real-time execution of surgical procedures has been realized.
Evaluation of diagnostic accuracy in detecting ordered symptom statuses without a gold standard
Wang, Zheyu; Zhou, Xiao-Hua; Wang, Miqu
2011-01-01
Our research is motivated by 2 methodological problems in assessing the diagnostic accuracy of traditional Chinese medicine (TCM) doctors in detecting a particular symptom whose true status has an ordinal scale and is unknown: imperfect gold standard bias and ordinal-scale symptom status. In this paper, we proposed a nonparametric maximum likelihood method for estimating and comparing the accuracy of different doctors in detecting a particular symptom without a gold standard when the true symptom status has ordered multiple classes. In addition, we extended the concept of the area under the receiver operating characteristic curve to a hyper-dimensional overall accuracy measure for diagnostic accuracy, with alternative graphs for displaying the results visually. The simulation studies showed that the proposed method had good performance in terms of bias and mean squared error. Finally, we applied our method to our motivating example of assessing the diagnostic abilities of 5 TCM doctors in detecting symptoms related to Chills disease. PMID:21209155
Artificial neural networks in mammography interpretation and diagnostic decision making.
Ayer, Turgay; Chen, Qiushi; Burnside, Elizabeth S
2013-01-01
Screening mammography is the most effective means for early detection of breast cancer. Although general rules for discriminating malignant and benign lesions exist, radiologists are unable to perfectly detect and classify all lesions as malignant and benign, for many reasons which include, but are not limited to, overlap of features that distinguish malignancy, difficulty in estimating disease risk, and variability in recommended management. When predictive variables are numerous and interact, ad hoc decision making strategies based on experience and memory may lead to systematic errors and variability in practice. The integration of computer models to help radiologists increase the accuracy of mammography examinations in diagnostic decision making has gained increasing attention in the last two decades. In this study, we provide an overview of one of the most commonly used models, artificial neural networks (ANNs), in mammography interpretation and diagnostic decision making and discuss important features in mammography interpretation. We conclude by discussing several common limitations of existing research on ANN-based detection and diagnostic models and provide possible future research directions.
Improving NAVFAC's total quality management of construction drawings with CLIPS
NASA Technical Reports Server (NTRS)
Antelman, Albert
1991-01-01
A diagnostic expert system to improve the quality of Naval Facilities Engineering Command (NAVFAC) construction drawings and specification is described. C Language Integrated Production System (CLIPS) and computer aided design layering standards are used in an expert system to check and coordinate construction drawings and specifications to eliminate errors and omissions.
Clinical Cognition and Diagnostic Error: Applications of a Dual Process Model of Reasoning
ERIC Educational Resources Information Center
Croskerry, Pat
2009-01-01
Both systemic and individual factors contribute to missed or delayed diagnoses. Among the multiple factors that impact clinical performance of the individual, the caliber of cognition is perhaps the most relevant and deserves our attention and understanding. In the last few decades, cognitive psychologists have gained substantial insights into the…
ERIC Educational Resources Information Center
Wilcox, Gabrielle; Schroeder, Meadow
2015-01-01
Psychoeducational assessment involves collecting, organizing, and interpreting a large amount of data from various sources. Drawing upon psychological and medical literature, we review two main approaches to clinical reasoning (deductive and inductive) and how they synergistically guide diagnostic decision-making. In addition, we discuss how the…
Modelling Transposition Latencies: Constraints for Theories of Serial Order Memory
ERIC Educational Resources Information Center
Farrell, Simon; Lewandowsky, Stephan
2004-01-01
Several competing theories of short-term memory can explain serial recall performance at a quantitative level. However, most theories to date have not been applied to the accompanying pattern of response latencies, thus ignoring a rich and highly diagnostic aspect of performance. This article explores and tests the error latency predictions of…
Spotting Erroneous Rules of Operation by the Individual Consistency Index.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.
1983-01-01
This study introduces the individual consistency index (ICI), which measures the extent to which patterns of responses to parallel sets of items remain consistent over time. ICI is used as an error diagnostic tool to detect aberrant response patterns resulting from the consistent application of erroneous rules of operation. (Author/PN)
ERIC Educational Resources Information Center
McIntosh, Beth; Dodd, Barbara
2009-01-01
Children with unintelligible speech differ in severity, underlying deficit, type of surface error patterns and response to treatment. Detailed treatment case studies, evaluating specific intervention protocols for particular diagnostic groups, can identify best practice for children with speech disorder. Three treatment case studies evaluated the…
ATS-PD: An Adaptive Testing System for Psychological Disorders
ERIC Educational Resources Information Center
Donadello, Ivan; Spoto, Andrea; Sambo, Francesco; Badaloni, Silvana; Granziol, Umberto; Vidotto, Giulio
2017-01-01
The clinical assessment of mental disorders can be a time-consuming and error-prone procedure, consisting of a sequence of diagnostic hypothesis formulation and testing aimed at restricting the set of plausible diagnoses for the patient. In this article, we propose a novel computerized system for the adaptive testing of psychological disorders.…
Clinical Problem Analysis (CPA): A Systematic Approach To Teaching Complex Medical Problem Solving.
ERIC Educational Resources Information Center
Custers, Eugene J. F. M.; Robbe, Peter F. De Vries; Stuyt, Paul M. J.
2000-01-01
Discusses clinical problem analysis (CPA) in medical education, an approach to solving complex clinical problems. Outlines the five step CPA model and examines the value of CPA's content-independent (methodical) approach. Argues that teaching students to use CPA will enable them to avoid common diagnostic reasoning errors and pitfalls. Compares…
The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone by two modelling systems, Chimere for Europe an...
The Development and Evaluation of Listening and Speaking Diagnosis and Remedial Teaching System
ERIC Educational Resources Information Center
Hsiao, Hsien-Sheng; Chang, Cheng-Sian; Lin, Chiou-Yan; Chen, Berlin; Wu, Chia-Hou; Lin, Chien-Yu
2016-01-01
In this study, a system was developed to offer adaptive remedial instruction materials to learners of Chinese as a foreign language (CFL). The Chinese Listening and Speaking Diagnosis and Remedial Instruction (CLSDRI) system integrated computerized diagnostic tests and remedial instruction materials to diagnose errors made in listening…
Development of a smartphone-based pulse oximeter with adaptive SNR/power balancing.
Phelps, Tom; Haowei Jiang; Hall, Drew A
2017-07-01
Millions worldwide suffer from diseases that exhibit early warning signs detectable by standard clinical-grade diagnostic tools. Unfortunately, such tools are often prohibitively expensive for the developing world, leading to inadequate healthcare and high mortality rates. To address this problem, a smartphone-based pulse oximeter is presented that interfaces with the phone through the audio jack, enabling point-of-care measurements of heart rate (HR) and oxygen saturation (SpO2). The device is designed to utilize existing phone resources (e.g., the processor, battery, and memory), resulting in a more portable and inexpensive diagnostic tool than standalone equivalents. By adaptively tuning the LED driving signal, the device is less dependent on phone-specific audio jack properties than prior audio jack-based work, making it universally compatible with all smartphones. We demonstrate that the pulse oximeter can adaptively optimize the signal-to-noise ratio (SNR) within the power constraints of a mobile phone (<10 mW) while maintaining high accuracy (HR error <3.4% and SpO2 error <3.7%) against a clinical-grade instrument.
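The abstract does not give the SpO2 algorithm; as a sketch, the textbook ratio-of-ratios estimate (with commonly cited calibration coefficients, not this paper's) looks like:

```python
# Standard pulse-oximetry sketch: R = (AC/DC)_red / (AC/DC)_infrared,
# mapped to SpO2 with an empirical linear calibration. The coefficients
# 110 and 25 are commonly cited textbook values, not taken from the paper.

def spo2_ratio_of_ratios(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate SpO2 (%) from pulsatile (AC) and baseline (DC) photodiode signals."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)
    return 110.0 - 25.0 * r

# Illustrative signal amplitudes: R = 0.8 -> roughly 90% saturation
print(spo2_ratio_of_ratios(0.02, 1.0, 0.025, 1.0))
```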
Kulkarni, H R; Kamal, M M; Arjune, D G
1999-12-01
The scoring system developed by Mair et al. (Acta Cytol 1989;33:809-813) is frequently used to grade the quality of cytology smears. Using a one-factor analytic structural equations model, we demonstrate that the errors in measurement of the parameters used in the Mair scoring system are highly and significantly correlated. We recommend the use of either a multiplicative scoring system, using linear scores, or an additive scoring system, using exponential scores, to correct for the correlated errors. We suggest that the 0, 1, and 2 points used in the Mair scoring system be replaced by 1, 2, and 4, respectively. Using data on fine-needle biopsies of 200 thyroid lesions by both fine-needle aspiration (FNA) and fine-needle capillary sampling (FNC), we demonstrate that our modification of the Mair scoring system is more sensitive and more consistent with the structural equations model. Therefore, we recommend that the modified Mair scoring system be used for classifying the diagnostic adequacy of cytology smears. Diagn. Cytopathol. 1999;21:387-393. Copyright 1999 Wiley-Liss, Inc.
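The proposed modification can be sketched directly: remap the original Mair points 0, 1, 2 to 1, 2, 4 and sum additively (or, equivalently per the authors, multiply the linear scores). The number of quality parameters scored below is illustrative, not taken from the abstract:

```python
# Sketch of the modified Mair scoring system proposed above:
# exponential per-parameter scores (1, 2, 4) summed additively.

REMAP = {0: 1, 1: 2, 2: 4}  # original point -> modified exponential score

def modified_mair_score(parameter_points):
    """Additive score over exponentially remapped per-parameter points (0-2 each)."""
    return sum(REMAP[p] for p in parameter_points)

# Example smear scored 2, 2, 1, 1, 0 on five illustrative quality parameters:
print(modified_mair_score([2, 2, 1, 1, 0]))  # -> 4 + 4 + 2 + 2 + 1 = 13
```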
Wafer-level colinearity monitoring for TFH applications
NASA Astrophysics Data System (ADS)
Moore, Patrick; Newman, Gary; Abreau, Kelly J.
2000-06-01
Advances in thin film head (TFH) designs continue to outpace those in the IC industry. The transition to giant magneto resistive (GMR) designs is underway along with the push toward areal densities in the 20 Gbit/inch2 regime and beyond. This comes at a time when the popularity of the low-cost personal computer (PC) is extremely high, and PC prices are continuing to fall. Consequently, TFH manufacturers are forced to deal with pricing pressure in addition to technological demands. New methods of monitoring and improving yield are required along with advanced head designs. TFH manufacturing is a two-step process. The first is a wafer-level process consisting of manufacturing devices on substrates using processes similar to those in the IC industry. The second half is a slider-level process where wafers are diced into 'rowbars' containing many heads. Each rowbar is then lapped to obtain the desired performance from each head. Variation in the placement of specific layers of each device on the bar, known as a colinearity error, causes a change in device performance and directly impacts yield. The photolithography tool and process contribute to colinearity errors. These components include stepper lens distortion errors, stepper stage errors, reticle fabrication errors, and CD uniformity errors. Currently, colinearity is only very roughly estimated during wafer-level TFH production. An absolute metrology tool, such as a Nikon XY, could be used to quantify colinearity with improved accuracy, but this technique is impractical since TFH manufacturers typically do not have this type of equipment at the production site. More importantly, this measurement technique does not provide the rapid feedback needed in a high-volume production facility. Consequently, the wafer-fab must rely on resistivity-based measurements from slider-fab to quantify colinearity errors. The feedback of this data may require several weeks, making it useless as a process diagnostic. 
This study examines a method of quickly estimating colinearity at the wafer-level with a test reticle and metrology equipment routinely found in TFH facilities. Colinearity results are correlated to slider-fab measurements on production devices. Stepper contributions to colinearity are estimated, and compared across multiple steppers and stepper generations. Multiple techniques of integrating this diagnostic into production are investigated and discussed.
The effects of center of rotation errors on cardiac SPECT imaging
NASA Astrophysics Data System (ADS)
Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.
2003-10-01
In SPECT imaging, center of rotation (COR) errors lead to the misalignment of projection data and can potentially degrade the quality of the reconstructed images. In this work, we study the effects of COR errors on cardiac SPECT imaging using simulation, point source, cardiac phantom, and patient studies. For simulation studies, we generate projection data using a uniform MCAT phantom first without modeling any physical effects (NPH), then with the modeling of detector response effect (DR) alone. We then corrupt the projection data with simulated sinusoid and step COR errors. For other studies, we introduce sinusoid COR errors to projection data acquired on SPECT systems. An OSEM algorithm is used for image reconstruction without detector response correction, but with nonuniform attenuation correction when needed. The simulation studies show that, when COR errors increase from 0 to 0.96 cm: 1) sinusoid COR errors in axial direction lead to intensity decrease in the inferoapical region; 2) step COR errors in axial direction lead to intensity decrease in the distal anterior region. The intensity decrease is more severe in images reconstructed from projection data with NPH than with DR; and 3) the effects of COR errors in transaxial direction seem to be insignificant. In other studies, COR errors slightly degrade point source resolution; COR errors of 0.64 cm or above introduce visible but insignificant nonuniformity in the images of uniform cardiac phantom; COR errors up to 0.96 cm in transaxial direction affect the lesion-to-background contrast (LBC) insignificantly in the images of cardiac phantom with defects, and COR errors up to 0.64 cm in axial direction only slightly decrease the LBC. For the patient studies with COR errors up to 0.96 cm, images have the same diagnostic/prognostic values as those without COR errors. 
This work suggests that COR errors of up to 0.64 cm are not likely to change the clinical applications of cardiac SPECT imaging when using iterative reconstruction algorithm without detector response correction.
NASA Astrophysics Data System (ADS)
Huber, Matthew S.; Ferriãre, Ludovic; Losiak, Anna; Koeberl, Christian
2011-09-01
Planar deformation features (PDFs) in quartz, one of the most commonly used diagnostic indicators of shock metamorphism, are planes of amorphous material that follow crystallographic orientations, and can thus be distinguished from non-shock-induced fractures in quartz. The process of indexing data for PDFs from universal-stage measurements has traditionally been performed using a manual graphical method, a time-consuming process in which errors can easily be introduced. A mathematical method and computer algorithm, which we call the Automated Numerical Index Executor (ANIE) program for indexing PDFs, was produced, and is presented here. The ANIE program is more accurate and faster than the manual graphical determination of Miller-Bravais indices, as it allows control of the exact error used in the calculation and removal of human error from the process.
AtomDB: Expanding an Accessible and Accurate Atomic Database for X-ray Astronomy
NASA Astrophysics Data System (ADS)
Smith, Randall
Since its inception in 2001, the AtomDB has become the standard repository of accurate and accessible atomic data for the X-ray astrophysics community, including laboratory astrophysicists, observers, and modelers. Modern calculations of collisional excitation rates now exist - and are in AtomDB - for all abundant ions in a hot plasma. AtomDB has expanded beyond providing just a collisional model, and now also contains photoionization data from XSTAR as well as a charge exchange model, amongst others. However, building and maintaining an accurate and complete database that can fully exploit the diagnostic potential of high-resolution X-ray spectra requires further work. The Hitomi results, sadly limited as they were, demonstrated the urgent need for the best possible wavelength and rate data, not merely for the strongest lines but for the diagnostic features that may have 1% or less of the flux of the strong lines. In particular, incorporation of weak but powerfully diagnostic satellite lines will be crucial to understanding the spectra expected from upcoming deep observations with Chandra and XMM-Newton, as well as the XARM and Athena satellites. Beyond incorporating this new data, a number of groups, both experimental and theoretical, have begun to produce data with errors and/or sensitivity estimates. We plan to use this to create statistically meaningful spectral errors on collisional plasmas, providing practical uncertainties together with model spectra. 
We propose to continue to (1) engage the X-ray astrophysics community regarding their issues and needs, notably by a critical comparison with other related databases and tools, (2) enhance AtomDB to incorporate a large number of satellite lines as well as updated wavelengths with error estimates, (3) continue to update the AtomDB with the latest calculations and laboratory measurements, in particular velocity-dependent charge exchange rates, and (4) enhance existing tools, and create new ones as needed to increase the functionality of, and access to, AtomDB.
Applying EVM to Satellite on Ground and In-Orbit Testing - Better Data in Less Time
NASA Technical Reports Server (NTRS)
Peters, Robert; Lebbink, Elizabeth-Klein; Lee, Victor; Model, Josh; Wezalis, Robert; Taylor, John
2008-01-01
Using Error Vector Magnitude (EVM) in satellite integration and test allows rapid verification of the Bit Error Rate (BER) performance of a satellite link. It is particularly well suited to measurement of low-bit-rate satellite links, where it can yield a major reduction in test time (about 3 weeks per satellite for the Geosynchronous Operational Environmental Satellite [GOES] satellites during ground test) and can provide diagnostic information. Empirical techniques developed to predict BER performance from EVM measurements, and lessons learned about applying these techniques during GOES N, O, and P integration test and post-launch testing, are discussed.
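The paper's EVM-to-BER techniques are empirical and not reproduced here; for intuition, the standard textbook relation for a BPSK link (data-aided EVM, so SNR is approximately 1/EVM^2) can be sketched as:

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_from_evm_bpsk(evm_rms):
    """Textbook BPSK relation (not the paper's empirical GOES fit):
    SNR ~= 1 / EVM^2, and BER = Q(sqrt(2 * SNR))."""
    snr = 1.0 / evm_rms ** 2
    return q_function(math.sqrt(2.0 * snr))

# 25% rms EVM -> SNR ~= 16 (12 dB) -> BER ~= 7.7e-9
print(ber_from_evm_bpsk(0.25))
```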
NASA Astrophysics Data System (ADS)
Nikitaev, V. G.; Nagornov, O. V.; Pronichev, A. N.; Polyakov, E. V.; Dmitrieva, V. V.
2017-12-01
The first stage of blood cancer diagnostics is the analysis of blood smears. Decision-making support systems would reduce the subjectivity of the diagnostic process and help avoid errors that can result in often irreversible changes in the patient's condition. Solving this problem therefore requires modern technology. Texture features are one tool for the automated classification of blood cells, and finding informative features among them is a promising task. The paper investigates the effect of image sensor noise on informative texture features using methods of mathematical modelling.
Describing Phonological Paraphasias in Three Variants of Primary Progressive Aphasia.
Dalton, Sarah Grace Hudspeth; Shultz, Christine; Henry, Maya L; Hillis, Argye E; Richardson, Jessica D
2018-03-01
The purpose of this study was to describe the linguistic environment of phonological paraphasias in 3 variants of primary progressive aphasia (semantic, logopenic, and nonfluent) and to describe the profiles of paraphasia production for each of these variants. Discourse samples of 26 individuals diagnosed with primary progressive aphasia were investigated for phonological paraphasias using the criteria established for the Philadelphia Naming Test (Moss Rehabilitation Research Institute, 2013). Phonological paraphasias were coded for paraphasia type, part of speech of the target word, target word frequency, type of segment in error, word position of consonant errors, type of error, and degree of change in consonant errors. Eighteen individuals across the 3 variants produced phonological paraphasias. Most paraphasias were nonword, followed by formal, and then mixed, with errors primarily occurring on nouns and verbs, with relatively few on function words. Most errors were substitutions, followed by addition and deletion errors, and few sequencing errors. Errors were evenly distributed across vowels, consonant singletons, and clusters, with more errors occurring in initial and medial positions of words than in the final position of words. Most consonant errors consisted of only a single-feature change, with few 2- or 3-feature changes. Importantly, paraphasia productions by variant differed from these aggregate results, with unique production patterns for each variant. These results suggest that a system where paraphasias are coded as present versus absent may be insufficient to adequately distinguish between the 3 subtypes of PPA. The 3 variants demonstrate patterns that may be used to improve phenotyping and diagnostic sensitivity. These results should be integrated with recent findings on phonological processing and speech rate. 
Future research should attempt to replicate these results in a larger sample of participants with longer speech samples and varied elicitation tasks. https://doi.org/10.23641/asha.5558107.
Characterizing the impact of model error in hydrologic time series recovery inverse problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.
Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.
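The inverse problem class described above can be illustrated with a toy deconvolution: build the lower-triangular Toeplitz operator for an impulse response, generate data with the true response, and invert with a slightly wrong one, so the recovery error is driven by model error rather than noise. Everything below is illustrative, not the paper's setup:

```python
import numpy as np

def toeplitz_convolution_matrix(h, n):
    """Lower-triangular Toeplitz matrix H such that H @ s == np.convolve(h, s)[:n]."""
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            if i - j < len(h):
                H[i, j] = h[i - j]
    return H

n = 50
s_true = np.sin(np.linspace(0, 3 * np.pi, n))   # time series to recover
h_true = np.array([0.5, 0.3, 0.2])              # true impulse response
h_model = np.array([0.45, 0.35, 0.2])           # systematically wrong model
d = toeplitz_convolution_matrix(h_true, n) @ s_true  # noiseless observations

# Least-squares recovery using the erroneous transfer function:
s_hat, *_ = np.linalg.lstsq(toeplitz_convolution_matrix(h_model, n), d, rcond=None)
print(float(np.max(np.abs(s_hat - s_true))))  # residual driven purely by model error
```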
Quality and patient safety in the diagnosis of breast cancer.
Raab, Stephen S; Swain, Justin; Smith, Natasha; Grzybicki, Dana M
2013-09-01
The media, medical-legal, and safety science perspectives of a laboratory medical error differ and assign variable levels of responsibility to individuals and systems. We examine how the media identifies, communicates, and interprets information related to anatomic pathology breast diagnostic errors compared to groups using a safety science Lean-based quality improvement perspective. The media approach focuses on the outcome of error from the patient perspective, and some errors have catastrophic consequences. The medical safety science perspective does not ignore the importance of patient outcome, but focuses on causes, including the active events and latent factors that contribute to the error. Lean improvement methods deconstruct work into individual steps consisting of tasks, communications, and flow in order to understand the effect of system design on current-state levels of quality. In the Lean model, system redesign to reduce errors depends on front-line staff knowledge and engagement to change the components of active work to develop best practices. In addition, Lean improvement methods require organizational and environmental alignment with the front-line change in order to improve the latent conditions affecting components such as regulation, education, and safety culture. Although we examine instances of laboratory error for a specific test in surgical pathology, the same model of change applies to all areas of the laboratory. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2015-01-01
This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
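The exhaustive search described above can be sketched in miniature: score each candidate suite by the trace of the weighted least-squares error covariance (a stand-in for the paper's sum-of-squared-estimation-errors metric) and keep the best suite. All matrices and sensor indices below are illustrative:

```python
import itertools
import numpy as np

# Toy sensor-selection search: 6 candidate sensors observing 3 health
# parameters through a linear sensitivity matrix H with noise covariance R.
rng = np.random.default_rng(1)
H = rng.normal(size=(6, 3))                       # illustrative sensitivities
R = np.diag([0.1, 0.2, 0.1, 0.3, 0.2, 0.1])      # illustrative noise variances
baseline = [0, 1, 2]                              # always-included sensors
optional = [3, 4, 5]                              # candidate additions

def estimation_error(rows):
    """Trace of the weighted least-squares error covariance for a sensor suite."""
    Hs = H[rows, :]
    Rs = R[np.ix_(rows, rows)]
    P = np.linalg.inv(Hs.T @ np.linalg.inv(Rs) @ Hs)
    return np.trace(P)

best = min(
    (baseline + list(extra)
     for k in range(len(optional) + 1)
     for extra in itertools.combinations(optional, k)),
    key=estimation_error,
)
# Every sensor adds information here, so the full suite wins:
print(sorted(best))  # -> [0, 1, 2, 3, 4, 5]
```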
Kumar, Savitha Anil; Jayanna, Prashanth; Prabhudesai, Shilpa; Kumar, Ajai
2014-01-01
To collect and tabulate errors and nonconformities in the preanalytical, analytical, and postanalytical process phases in a diagnostic clinical laboratory that supports a super-specialty cancer center in India, and identify areas of potential improvement in patient services. We collected data from our laboratory during a period of 24 months. Departments in the study included clinical biochemistry, hematology, clinical pathology, microbiology and serology, surgical pathology, and molecular pathology. We had initiated quality assessment based on international standards in our laboratory in 2010, with the aim of obtaining accreditation by national and international governing bodies. We followed the guidelines specified by International Organization for Standardization (ISO) 15189:2007 to identify noncompliant elements of our processes. Among a total of 144,030 specimens that our referral laboratory received during the 2-year period of our study, we uncovered an overall error rate for all 3 process phases of 1.23%; all of our error rates closely approximated the results from our peer institutions. Errors were most common in the preanalytical phase in both years of study; preanalytical- and postanalytical-phase errors constituted more than 90% of all errors. Further improvements are warranted in laboratory services and are contingent on adequate training and interdepartmental communication and cooperation. Copyright © by the American Society for Clinical Pathology (ASCP).
Schiefer, Ulrich; Kraus, Christina; Baumbach, Peter; Ungewiß, Judith; Michels, Ralf
2016-10-14
All over the world, refractive errors are among the most frequently occurring treatable disturbances of visual function. Ametropias have a prevalence of nearly 70% among adults in Germany and are thus of great epidemiologic and socio-economic relevance. In the light of their own clinical experience, the authors review pertinent articles retrieved by a selective literature search employing the terms "ametropia," "anisometropia," "refraction," "visual acuity," and "epidemiology." In 2011, only 31% of persons over age 16 in Germany did not use any kind of visual aid; 63.4% wore eyeglasses and 5.3% wore contact lenses. Refractive errors were the most common reason for consulting an ophthalmologist, accounting for 21.1% of all outpatient visits. A pinhole aperture (stenopeic slit) is a suitable instrument for the basic diagnostic evaluation of impaired visual function due to optical factors. Spherical refractive errors (myopia and hyperopia), cylindrical refractive errors (astigmatism), unequal refractive errors in the two eyes (anisometropia), and the typical optical disturbance of old age (presbyopia) cause specific functional limitations and can be detected by a physician who does not need to be an ophthalmologist. Simple functional tests can be used in everyday clinical practice to determine quickly, easily, and safely whether the patient is suffering from a benign and easily correctable type of visual impairment, or whether there are other, more serious underlying causes.
Insight into biases and sequencing errors for amplicon sequencing with the Illumina MiSeq platform.
Schirmer, Melanie; Ijaz, Umer Z; D'Amore, Rosalinda; Hall, Neil; Sloan, William T; Quince, Christopher
2015-03-31
With read lengths of currently up to 2 × 300 bp, high throughput and low sequencing costs Illumina's MiSeq is becoming one of the most utilized sequencing platforms worldwide. The platform is manageable and affordable even for smaller labs. This enables quick turnaround on a broad range of applications such as targeted gene sequencing, metagenomics, small genome sequencing and clinical molecular diagnostics. However, Illumina error profiles are still poorly understood and programs are therefore not designed for the idiosyncrasies of Illumina data. A better knowledge of the error patterns is essential for sequence analysis and vital if we are to draw valid conclusions. Studying true genetic variation in a population sample is fundamental for understanding diseases, evolution and origin. We conducted a large study on the error patterns for the MiSeq based on 16S rRNA amplicon sequencing data. We tested state-of-the-art library preparation methods for amplicon sequencing and showed that the library preparation method and the choice of primers are the most significant sources of bias and cause distinct error patterns. Furthermore we tested the efficiency of various error correction strategies and identified quality trimming (Sickle) combined with error correction (BayesHammer) followed by read overlapping (PANDAseq) as the most successful approach, reducing substitution error rates on average by 93%. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Errors in the ultrasound diagnosis of the kidneys, ureters and urinary bladder
Wieczorek, Andrzej Paweł; Tyloch, Janusz F.
2013-01-01
The article presents the most frequent errors made in the ultrasound diagnosis of the urinary system. They usually result from improper technique of ultrasound examination or its erroneous interpretation. Such errors are frequent effects of insufficient experience of the ultrasonographer, inadequate class of the scanner, insufficient knowledge of its operation as well as of wrong preparation of patients, their constitution, severe condition and the lack of cooperation during the examination. The reasons for misinterpretations of ultrasound images of the urinary system may lie in a large polymorphism of the kidney (defects and developmental variants) and may result from improper access to the organ as well as from the presence of artefacts. Errors may also result from the lack of knowledge concerning clinical and laboratory data. Moreover, mistakes in ultrasound diagnosis of the urinary system are frequently related to the lack of knowledge of the management algorithms and diagnostic possibilities of other imaging modalities. The paper lists errors in ultrasound diagnosis of the urinary system divided into: errors resulting from improper technique of examination, artefacts caused by incorrect preparation of patients for the examination or their constitution and errors resulting from misinterpretation of ultrasound images of the kidneys (such as their number, size, fluid spaces, pathological lesions and others), ureters and urinary bladder. Each physician performing kidney or bladder ultrasound examination should possess the knowledge of the most frequent errors and their causes which might help to avoid them. PMID:26674139
Disclosing harmful medical errors to patients: tackling three tough cases.
Gallagher, Thomas H; Bell, Sigall K; Smith, Kelly M; Mello, Michelle M; McDonald, Timothy B
2009-09-01
A gap exists between recommendations to disclose errors to patients and current practice. This gap may reflect important, yet unanswered questions about implementing disclosure principles. We explore some of these unanswered questions by presenting three real cases that pose challenging disclosure dilemmas. The first case involves a pancreas transplant that failed due to the pancreas graft being discarded, an error that was not disclosed partly because the family did not ask clarifying questions. Relying on patient or family questions to determine the content of disclosure is problematic. We propose a standard of materiality that can help clinicians to decide what information to disclose. The second case involves a fatal diagnostic error that the patient's widower was unaware had happened. The error was not disclosed out of concern that disclosure would cause the widower more harm than good. This case highlights how institutions can overlook patients' and families' needs following errors and emphasizes that benevolent deception has little role in disclosure. Institutions should consider whether involving neutral third parties could make disclosures more patient centered. The third case presents an intraoperative cardiac arrest due to a large air embolism where uncertainty around the clinical event was high and complicated the disclosure. Uncertainty is common to many medical errors but should not deter open conversations with patients and families about what is and is not known about the event. Continued discussion within the medical profession about applying disclosure principles to real-world cases can help to better meet patients' and families' needs following medical errors.
ERIC Educational Resources Information Center
Martin-Blas, Teresa; Seidel, Luis; Serrano-Fernandez, Ana
2010-01-01
This work presents the results of a study whose aim is to detect systematic errors about the concept of force among freshmen students. The researchers analysed the results of the Force Concept Inventory test, which was administered to two different groups of students. The results show that, although there were significant performance variations…
Verification bias an underrecognized source of error in assessing the efficacy of medical imaging.
Petscavage, Jonelle M; Richardson, Michael L; Carr, Robert B
2011-03-01
Diagnostic tests are validated by comparison against a "gold standard" reference test. When the reference test is invasive or expensive, it may not be applied to all patients. This can result in biased estimates of the sensitivity and specificity of the diagnostic test. This type of bias is called "verification bias," and is a common problem in imaging research. The purpose of our study is to estimate the prevalence of verification bias in the recent radiology literature. All issues of the American Journal of Roentgenology (AJR), Academic Radiology, Radiology, and European Journal of Radiology (EJR) between November 2006 and October 2009 were reviewed for original research articles mentioning sensitivity or specificity as endpoints. Articles were read to determine whether verification bias was present and searched for author recognition of verification bias in the design. During 3 years, these journals published 2969 original research articles. A total of 776 articles used sensitivity or specificity as an outcome. Of these, 211 articles demonstrated potential verification bias. The fraction of articles with potential bias was respectively 36.4%, 23.4%, 29.5%, and 13.4% for AJR, Academic Radiology, Radiology, and EJR. The total fraction of papers with potential bias in which the authors acknowledged this bias was 17.1%. Verification bias is a common and frequently unacknowledged source of error in efficacy studies of diagnostic imaging. Bias can often be eliminated by proper study design. When it cannot be eliminated, it should be estimated and acknowledged. Published by Elsevier Inc.
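A small simulation makes the mechanism concrete (the prevalence, accuracy figures, and 5% verification sample below are illustrative assumptions, not values from the study): when only test-positive patients routinely receive the reference standard, the naive estimates overstate sensitivity and understate specificity.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
disease = rng.random(n) < 0.10                       # 10% prevalence (assumed)

# Imaging test with assumed true sensitivity 0.80 and specificity 0.90:
test_pos = np.where(disease, rng.random(n) < 0.80, rng.random(n) < 0.10)

true_sens = (test_pos & disease).sum() / disease.sum()
true_spec = (~test_pos & ~disease).sum() / (~disease).sum()

# Verification bias: only test-positives get the invasive reference standard,
# plus a small 5% random sample of test-negatives.
verified = test_pos | (rng.random(n) < 0.05)
v_sens = (test_pos & disease & verified).sum() / (disease & verified).sum()
v_spec = (~test_pos & ~disease & verified).sum() / (~disease & verified).sum()
# v_sens overstates the true sensitivity; v_spec understates the true specificity,
# because diseased test-negatives are rarely verified and so rarely counted.
```

Correction methods (e.g., reweighting verified test-negatives by their sampling fraction) exist, but as the article notes, the first step is recognizing that the bias is present.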
Comparison of disagreement and error rates for three types of interdepartmental consultations.
Renshaw, Andrew A; Gould, Edwin W
2005-12-01
Previous studies have documented a relatively high rate of disagreement for interdepartmental consultations, but follow-up is limited. We reviewed the results of 3 types of interdepartmental consultations in our hospital during a 2-year period, including 328 incoming, 928 pathologist-generated outgoing, and 227 patient- or clinician-generated outgoing consults. The disagreement rate was significantly higher for incoming consults (10.7%) than for outgoing pathologist-generated consults (5.9%) (P = .06). Disagreement rates for outgoing patient- or clinician-generated consults were not significantly different from either other type (7.9%). Additional consultation, biopsy, or testing follow-up was available for 19 (54%) of 35, 14 (25%) of 55, and 6 (33%) of 18 incoming, outgoing pathologist-generated, and outgoing patient- or clinician-generated consults with disagreements, respectively; the percentage of errors varied widely (15/19 [79%], 8/14 [57%], and 2/6 [33%], respectively), but differences were not significant (P >.05 for each). Review of the individual errors revealed specific diagnostic areas in which improvement in performance might be made. Disagreement rates for interdepartmental consultation ranged from 5.9% to 10.7%, but only 33% to 79% represented errors. Additional consultation, tissue, and testing results can aid in distinguishing disagreements from errors.
Clinical Errors and Medical Negligence
Oyebode, Femi
2013-01-01
This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3–16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. PMID:23343656
Patient safety in the care of mentally ill people in Switzerland: Action plan 2016
Richard, Aline; Mascherek, Anna C; Schwappach, David L B
2017-01-01
Background: Patient safety in mental healthcare has not yet attracted great attention, although the burden and the prevalence of mental diseases are high. The risk of errors with the potential to harm patients, such as aggression against self or others or non-drug treatment errors, is particularly high in this vulnerable group. Aim: To develop priority topics and strategies for action to foster patient safety in mental healthcare. Method: The Swiss patient safety foundation, together with experts, conducted round table discussions and a Delphi questionnaire to define topics along the treatment pathway and to prioritise these topics. Finally, fields of action were developed. Results: An action plan was developed, including the definition and prioritization of 9 topics where errors may occur. A global rating task revealed errors concerning diagnostics and structural errors as most important. This led to the development of 4 fields of action (awareness raising, research, implementation, and education and training), including practice-oriented potential starting points to enhance patient safety. Conclusions: The action plan highlights issues of high concern for patient safety in mental healthcare. It serves as a starting point for the development of strategies for action as well as of concrete activities.
Chiolero, Arnaud; Paccaud, Fred; Aujesky, Drahomir; Santschi, Valérie; Rodondi, Nicolas
2015-01-01
Overdiagnosis is the diagnosis of an abnormality that is not associated with a substantial health hazard and that patients derive no benefit from being aware of. It is neither a misdiagnosis (diagnostic error), nor a false positive result (positive test in the absence of a real abnormality). It mainly results from screening, use of increasingly sensitive diagnostic tests, incidental findings on routine examinations, and widening diagnostic criteria to define a condition requiring an intervention. The blurring boundaries between risk and disease, physicians' fear of missing a diagnosis and patients' need for reassurance are further causes of overdiagnosis. Overdiagnosis often implies procedures to confirm or exclude the presence of the condition and is by definition associated with useless treatments and interventions, generating harm and costs without any benefit. Overdiagnosis also diverts healthcare professionals from caring about other health issues. Preventing overdiagnosis requires increasing awareness of healthcare professionals and patients about its occurrence, the avoidance of unnecessary and untargeted diagnostic tests, and the avoidance of screening without demonstrated benefits. Furthermore, accounting systematically for the harms and benefits of screening and diagnostic tests and determining risk factor thresholds based on the expected absolute risk reduction would also help prevent overdiagnosis.
Nikolic, Mark I; Sarter, Nadine B
2007-08-01
To examine operator strategies for diagnosing and recovering from errors and disturbances as well as the impact of automation design and time pressure on these processes. Considerable efforts have been directed at error prevention through training and design. However, because errors cannot be eliminated completely, their detection, diagnosis, and recovery must also be supported. Research has focused almost exclusively on error detection. Little is known about error diagnosis and recovery, especially in the context of event-driven tasks and domains. With a confederate pilot, 12 airline pilots flew a 1-hr simulator scenario that involved three challenging automation-related tasks and events that were likely to produce erroneous actions or assessments. Behavioral data were compared with a canonical path to examine pilots' error and disturbance management strategies. Debriefings were conducted to probe pilots' system knowledge. Pilots seldom followed the canonical path to cope with the scenario events. Detection of a disturbance was often delayed. Diagnostic episodes were rare because of pilots' knowledge gaps and time criticality. In many cases, generic inefficient recovery strategies were observed, and pilots relied on high levels of automation to manage the consequences of an error. Our findings describe and explain the nature and shortcomings of pilots' error management activities. They highlight the need for improved automation training and design to achieve more timely detection, accurate explanation, and effective recovery from errors and disturbances. Our findings can inform the design of tools and techniques that support disturbance management in various complex, event-driven environments.
Green, Jonathan D; Annunziata, Anthony; Kleiman, Sarah E; Bovin, Michelle J; Harwell, Aaron M; Fox, Annie M L; Black, Shimrit K; Schnurr, Paula P; Holowka, Darren W; Rosen, Raymond C; Keane, Terence M; Marx, Brian P
2017-08-01
Posttraumatic stress disorder (PTSD) diagnostic criteria have been criticized for including symptoms that overlap with commonly comorbid disorders, which critics argue undermines the validity of the diagnosis and inflates psychiatric comorbidity rates. In response, the upcoming 11th edition of the International Classification of Diseases (ICD-11) will offer PTSD diagnostic criteria that are intended to promote diagnostic accuracy. However, diagnostic utility analyses have not yet assessed whether these criteria minimize diagnostic errors. The present study examined the diagnostic utility of each PTSD symptom in the fifth edition of the Diagnostic and Statistical Manual for Mental Disorders (DSM-5) for males and females. Participants were 1,347 individuals enrolled in a longitudinal national registry of returning veterans receiving care at a Department of Veterans Affairs (VA) facility. Doctoral level clinicians assessed all participants using the PTSD module of the Structured Clinical Interview for DSM. Of the 20 symptoms examined, the majority performed in the fair to poor range on test quality indices. Although a few items did perform in the good (or better) range, only half were ICD-11 symptoms. None of the 20 symptoms demonstrated good quality of efficiency. Results demonstrated few sex differences across indices. There were no differences in the proportion of comorbid psychiatric disorders or functional impairment between DSM-5 and ICD-11 criteria. ICD-11 PTSD criteria demonstrate neither greater diagnostic specificity nor reduced rates of comorbidity relative to DSM-5 criteria and, as such, do not perform as intended. Modifications to existing symptoms or new symptoms may improve differential diagnosis. © 2017 Wiley Periodicals, Inc.
Understanding seasonal variability of uncertainty in hydrological prediction
NASA Astrophysics Data System (ADS)
Li, M.; Wang, Q. J.
2012-04-01
Understanding uncertainty in hydrological prediction can be highly valuable for improving the reliability of streamflow prediction. In this study, a monthly water balance model, WAPABA, is combined with three error models in a Bayesian joint probability framework to investigate the seasonal dependency of the prediction error structure. A seasonally invariant error model, analogous to traditional time series analysis, uses constant parameters for model error and accounts for no seasonal variation. In contrast, a seasonally variant error model uses a different set of bias, variance, and autocorrelation parameters for each calendar month. Potential connections among model parameters from similar months are not considered within the seasonally variant model, which can result in over-fitting and over-parameterization. A hierarchical error model further applies distributional restrictions on the model parameters within a Bayesian hierarchical framework. An iterative algorithm is implemented to expedite the maximum a posteriori (MAP) estimation of the hierarchical error model. The three error models are applied to forecasting streamflow at a catchment in southeast Australia in a cross-validation analysis. This study also presents a number of statistical measures and graphical tools to compare the predictive skills of the different error models. Judged by probability integral transform histograms and other diagnostic graphs, the hierarchical error model conforms better to reliability than the seasonally invariant error model. The hierarchical error model also generally provides the most accurate mean prediction in terms of the Nash-Sutcliffe model efficiency coefficient and the best probabilistic prediction in terms of the continuous ranked probability score (CRPS). The model parameters of the seasonally variant error model are very sensitive to each cross-validation fold, while the hierarchical error model produces much more robust and reliable model parameters. Furthermore, the results of the hierarchical error model show that most model parameters are not seasonally variant except for the error bias. The seasonally variant error model is likely to use more parameters than necessary to maximize the posterior likelihood. This flexibility and robustness indicate that the hierarchical error model has great potential for future streamflow predictions.
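For reference, the CRPS used above to score probabilistic predictions has a closed form when the predictive distribution is Gaussian (this is the standard textbook expression, not code from the study):

```python
import math

def crps_gaussian(mu, sigma, y):
    """CRPS of a Gaussian predictive distribution N(mu, sigma^2) at observation y.
    Lower is better; as sigma -> 0 it reduces to the absolute error |y - mu|."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))
```

Averaging this score over all forecast-observation pairs gives a single number that rewards both sharp and well-calibrated predictive distributions, which is why it is a natural companion to reliability diagnostics such as PIT histograms.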
Diagnostics in the Extendable Integrated Support Environment (EISE)
NASA Technical Reports Server (NTRS)
Brink, James R.; Storey, Paul
1988-01-01
Extendable Integrated Support Environment (EISE) is a real-time computer network consisting of commercially available hardware and software components to support systems level integration, modifications, and enhancement to weapons systems. The EISE approach offers substantial potential savings by eliminating unique support environments in favor of sharing common modules for the support of operational weapon systems. An expert system is being developed that will help support diagnosing faults in this network. This is a multi-level, multi-expert diagnostic system that uses experiential knowledge relating symptoms to faults and also reasons from structural and functional models of the underlying physical model when experiential reasoning is inadequate. The individual expert systems are orchestrated by a supervisory reasoning controller, a meta-level reasoner which plans the sequence of reasoning steps to solve the given specific problem. The overall system, termed the Diagnostic Executive, accesses systems level performance checks and error reports, and issues remote test procedures to formulate and confirm fault hypotheses.
Bayesian tomography and integrated data analysis in fusion diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Dong, E-mail: lid@swip.ac.cn; Dong, Y. B.; Deng, Wei
2016-11-15
In this article, a Bayesian tomography method using a non-stationary Gaussian process as a prior is introduced. The Bayesian formalism allows quantities which bear uncertainty to be expressed in probabilistic form, so that the uncertainty of a final solution can be fully resolved from the confidence interval of a posterior probability. Moreover, a consistency check of that solution can be performed by checking whether the misfits between predicted and measured data are reasonably within an assumed data error. In particular, the accuracy of reconstructions is significantly improved by using the non-stationary Gaussian process, which can adapt to the varying smoothness of the emission distribution. The implementation of this method for a soft X-ray diagnostic on HL-2A has been used to explore relevant physics in equilibrium and MHD instability modes. This project is carried out within a large-scale inference framework, aiming at an integrated analysis of heterogeneous diagnostics.
Dissociable Genetic Contributions to Error Processing: A Multimodal Neuroimaging Study
Agam, Yigal; Vangel, Mark; Roffman, Joshua L.; Gallagher, Patience J.; Chaponis, Jonathan; Haddad, Stephen; Goff, Donald C.; Greenberg, Jennifer L.; Wilhelm, Sabine; Smoller, Jordan W.; Manoach, Dara S.
2014-01-01
Background Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN), an event-related potential, and functional MRI activation of the dorsal anterior cingulate cortex (dACC). While theorized to reflect the same neural process, recent evidence suggests that the ERN arises from the posterior cingulate cortex not the dACC. Here, we tested the hypothesis that these two error markers also have different genetic mediation. Methods We measured both error markers in a sample of 92 comprised of healthy individuals and those with diagnoses of schizophrenia, obsessive-compulsive disorder or autism spectrum disorder. Participants performed the same task during functional MRI and simultaneously acquired magnetoencephalography and electroencephalography. We examined the mediation of the error markers by two single nucleotide polymorphisms: dopamine D4 receptor (DRD4) C-521T (rs1800955), which has been associated with the ERN and methylenetetrahydrofolate reductase (MTHFR) C677T (rs1801133), which has been associated with error-related dACC activation. We then compared the effects of each polymorphism on the two error markers modeled as a bivariate response. Results We replicated our previous report of a posterior cingulate source of the ERN in healthy participants in the schizophrenia and obsessive-compulsive disorder groups. The effect of genotype on error markers did not differ significantly by diagnostic group. DRD4 C-521T allele load had a significant linear effect on ERN amplitude, but not on dACC activation, and this difference was significant. MTHFR C677T allele load had a significant linear effect on dACC activation but not ERN amplitude, but the difference in effects on the two error markers was not significant. Conclusions DRD4 C-521T, but not MTHFR C677T, had a significant differential effect on two canonical error markers. 
Together with the anatomical dissociation between the ERN and error-related dACC activation, these findings suggest that these error markers have different neural and genetic mediation. PMID:25010186
van Karnebeek, Clara D M; Stockler, Sylvia
2012-03-01
Intellectual disability ('developmental delay' at age <5 years) affects 2.5% of the population worldwide. Recommendations to investigate genetic causes of intellectual disability are based on frequencies of single conditions and on the yield of diagnostic methods, rather than availability of causal therapy. Inborn errors of metabolism constitute a subgroup of rare genetic conditions for which an increasing number of treatments has become available. To identify all currently treatable inborn errors of metabolism presenting with predominantly intellectual disability, we performed a systematic literature review. We applied Cochrane Collaboration guidelines in formulation of PICO and definitions, and searched in Pubmed (1960-2011) and relevant (online) textbooks to identify 'all inborn errors of metabolism presenting with intellectual disability as major feature'. We assessed levels of evidence of treatments and characterised the effect of treatments on IQ/development and related outcomes. We identified a total of 81 'treatable inborn errors of metabolism' presenting with intellectual disability as a major feature, including disorders of amino acids (n=12), cholesterol and bile acid (n=2), creatine (n=3), fatty aldehydes (n=1); glucose homeostasis and transport (n=2); hyperhomocysteinemia (n=7); lysosomes (n=12), metals (n=3), mitochondria (n=2), neurotransmission (n=7); organic acids (n=19), peroxisomes (n=1), pyrimidines (n=2), urea cycle (n=7), and vitamins/co-factors (n=8). 62% (n=50) of all disorders are identified by metabolic screening tests in blood (plasma amino acids, homocysteine) and urine (creatine metabolites, glycosaminoglycans, oligosaccharides, organic acids, pyrimidines). For the remaining disorders (n=31) a 'single test per single disease' approach including primary molecular analysis is required. Therapeutic modalities include: sick-day management, diet, co-factor/vitamin supplements, substrate inhibition, stem cell transplant, gene therapy.
Therapeutic effects include improvement and/or stabilisation of psychomotor/cognitive development, behaviour/psychiatric disturbances, seizures, and neurologic and systemic manifestations. The levels of available evidence for the various treatments range from Level 1b,c (n=5); Level 2a,b,c (n=14); Level 4 (n=45), to Level 4-5 (n=27). In clinical practice more than 60% of treatments with evidence level 4-5 are internationally accepted as 'standard of care'. This literature review generated the evidence to prioritise treatability in the diagnostic evaluation of intellectual disability. Our results were translated into digital information tools for the clinician (www.treatable-id.org), which are part of a diagnostic protocol, currently implemented for evaluation of effectiveness in our institution. Treatments for these disorders are relatively accessible, affordable and have acceptable side-effects. Evidence for the majority of the therapies is limited, however; international collaborations, patient registries, and novel trial methodologies are key in turning the tide for rare diseases such as these. Copyright © 2011 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.
2018-05-01
To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosion-hazardous facilities of the oil and gas industry are extremely pressing. The problem is especially acute for facilities where a loss of DE accuracy would inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. In this work, we address the problem of selecting the optimal variant of an error detection system according to a validation criterion. Known methods for solving such problems have exponential time complexity. Therefore, to reduce the time needed to solve the problem, the validation criterion is optimized with an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems. The advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems based on combining them [1].
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
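The core idea of incorporating measurement error into ABC acceptance can be sketched on a toy problem (a minimal illustration under our own assumptions, not the authors' coverage-test implementation; the prior, summaries, and error estimate here are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: repeated measures y_i = theta + e_i, with e_i ~ N(0, sigma),
# matching the paper's assumption of normally distributed, independent errors.
theta_true, sigma_true, n = 3.0, 1.0, 20
y_obs = theta_true + sigma_true * rng.normal(size=n)

def error_calibrated_abc(y_obs, n_accept=2000, prior=(-10.0, 10.0)):
    """Probabilistic ABC: accept a prior draw with probability given by the
    Gaussian error density of the observed-vs-simulated summary discrepancy,
    scaled by the estimated standard error of the mean."""
    n = len(y_obs)
    sd = y_obs.std(ddof=1)
    se = sd / np.sqrt(n)                  # estimated measurement error of the mean
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.uniform(*prior)       # draw from the (assumed) uniform prior
        y_sim = theta + sd * rng.normal(size=n)
        diff = y_obs.mean() - y_sim.mean()
        if rng.uniform() < np.exp(-0.5 * (diff / se) ** 2):
            accepted.append(theta)
    return np.array(accepted)

posterior = error_calibrated_abc(y_obs)
```

The Gaussian acceptance kernel replaces the usual hard distance threshold, so the tolerance is set by the estimated error rather than tuned by hand.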
Error field optimization in DIII-D using extremum seeking control
NASA Astrophysics Data System (ADS)
Lanctot, M. J.; Olofsson, K. E. J.; Capella, M.; Humphreys, D. A.; Eidietis, N.; Hanson, J. M.; Paz-Soldan, C.; Strait, E. J.; Walker, M. L.
2016-07-01
DIII-D experiments have demonstrated a new real-time approach to tokamak error field control based on maximizing the toroidal angular momentum. This approach uses extremum seeking control theory to optimize the error field in real time without inducing instabilities. Slowly-rotating n = 1 fields (the dither), generated by external coils, are used to perturb the angular momentum, monitored in real-time using a charge-exchange spectroscopy diagnostic. Simple signal processing of the rotation measurements extracts information about the rotation gradient with respect to the control coil currents. This information is used to converge the control coil currents to a point that maximizes the toroidal angular momentum. The technique is well-suited for multi-coil, multi-harmonic error field optimizations in disruption sensitive devices as it does not require triggering locked tearing modes or plasma current disruptions. Control simulations highlight the importance of the initial search direction on the rate of convergence, and identify future algorithm upgrades that may allow more rapid convergence, projecting to convergence times in ITER on the order of tens of seconds.
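The dither-and-demodulate loop at the heart of extremum seeking can be sketched as follows (a generic textbook-style sketch, not the DIII-D controller; the quadratic "rotation" response and all gains are illustrative assumptions):

```python
import numpy as np

# Hypothetical scalar plant: rotation vs. one coil current, maximized at I = 2.0.
def rotation(I):
    return 10.0 - (I - 2.0) ** 2

def extremum_seek(I0=0.0, a=0.2, omega=5.0, k=1.0, dt=0.01, steps=60000):
    """Classic sinusoidal-dither extremum seeking: perturb the input with a
    small sinusoid, demodulate the measured output to estimate the local
    gradient, and integrate that estimate to climb toward the maximum."""
    I, t = I0, 0.0
    y_lp = rotation(I0)                    # low-pass state (removes output DC)
    alpha = 0.01
    for _ in range(steps):
        s = np.sin(omega * t)
        y = rotation(I + a * s)            # perturbed measurement
        y_lp += alpha * (y - y_lp)         # slow average of the output
        grad = (y - y_lp) * s * (2.0 / a)  # demodulated gradient estimate
        I += k * grad * dt                 # ascend the estimated gradient
        t += dt
    return I

I_opt = extremum_seek()
```

Averaged over a dither period, the demodulated signal approximates the local slope, so the current converges to the rotation-maximizing value without ever needing an explicit model of the plasma response.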
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
Disrupted prediction errors index social deficits in autism spectrum disorder
Balsters, Joshua H; Apps, Matthew A J; Bolis, Dimitris; Lehner, Rea; Gallagher, Louise; Wenderoth, Nicole
2017-01-01
Social deficits are a core symptom of autism spectrum disorder; however, the perturbed neural mechanisms underpinning these deficits remain unclear. It has been suggested that social prediction errors—coding discrepancies between the predicted and actual outcome of another’s decisions—might play a crucial role in processing social information. While the gyral surface of the anterior cingulate cortex signalled social prediction errors in typically developing individuals, this crucial social signal was altered in individuals with autism spectrum disorder. Importantly, the degree to which social prediction error signalling was aberrant correlated with diagnostic measures of social deficits. Effective connectivity analyses further revealed that, in typically developing individuals but not in autism spectrum disorder, the magnitude of social prediction errors was driven by input from the ventromedial prefrontal cortex. These data provide a novel insight into the neural substrates underlying autism spectrum disorder social symptom severity, and further research into the gyral surface of the anterior cingulate cortex and ventromedial prefrontal cortex could provide more targeted therapies to help ameliorate social deficits in autism spectrum disorder. PMID:28031223
Amoretti, M Cristina; Lalumera, Elisabetta
2018-05-30
The general concept of mental disorder specified in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders is definitional in character: a mental disorder might be identified with a harmful dysfunction. The manual also contains the explicit claim that each individual mental disorder should meet the requirements posed by the definition. The aim of this article is two-fold. First, we shall analyze the definition of the superordinate concept of mental disorder to better understand what necessary (and sufficient) criteria actually characterize such a concept. Second, we shall consider the concepts of some individual mental disorders and show that they are in tension with the definition of the superordinate concept, taking pyromania and narcissistic personality disorder as case studies. Our main point is that an unexplained and not-operationalized dysfunction requirement that is included in the general definition, while being systematically violated by the diagnostic criteria of specific mental disorders, is a logical error. Then, either we unpack and operationalize the dysfunction requirement, and include explicit diagnostic criteria that can actually meet it, or we simply drop it.
Frederick, R I
2000-01-01
Mixed group validation (MGV) is offered as an alternative to criterion group validation (CGV) to estimate the true positive and false positive rates of tests and other diagnostic signs. CGV requires perfect confidence about each research participant's status with respect to the presence or absence of pathology. MGV determines diagnostic efficiencies based on group data; knowing an individual's status with respect to pathology is not required. MGV can use relatively weak indicators to validate better diagnostic signs, whereas CGV requires perfect diagnostic signs to avoid error in computing true positive and false positive rates. The process of MGV is explained, and a computer simulation demonstrates the soundness of the procedure. MGV of the Rey 15-Item Memory Test (Rey, 1958) for 723 pre-trial criminal defendants resulted in higher estimates of true positive rates and lower estimates of false positive rates as compared with prior research conducted with CGV. The author demonstrates how MGV addresses all the criticisms Rogers (1997b) outlined for differential prevalence designs in malingering detection research. Copyright 2000 John Wiley & Sons, Ltd.
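The arithmetic behind mixed group validation follows directly from the abstract: if two groups have different (assumed) base rates of pathology, the observed positive rate in each group is a mixture of the true positive and false positive rates, giving a solvable linear system. A minimal sketch (the base rates and counts below are hypothetical, not Frederick's data):

```python
import numpy as np

def mgv_rates(p1, r1, p2, r2):
    """Mixed group validation for a two-group design.

    p1, p2: assumed base rates of pathology in groups 1 and 2
    r1, r2: observed positive rates of the diagnostic sign in each group
    Solves r_i = TPR * p_i + FPR * (1 - p_i) for (TPR, FPR)."""
    A = np.array([[p1, 1.0 - p1],
                  [p2, 1.0 - p2]])
    tpr, fpr = np.linalg.solve(A, np.array([r1, r2]))
    return tpr, fpr

# Hypothetical example: base rates 0.6 and 0.2, observed sign rates 0.56 and 0.24.
tpr, fpr = mgv_rates(0.6, 0.56, 0.2, 0.24)
```

No individual's true status is needed, only the group-level mixture proportions, which is exactly what lets MGV avoid the perfect-criterion requirement of CGV.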
NASA Technical Reports Server (NTRS)
Lindsey, Tony; Pecheur, Charles
2004-01-01
Livingstone PathFinder (LPF) is a simulation-based computer program for verifying autonomous diagnostic software. LPF is designed especially to be applied to NASA's Livingstone computer program, which implements a qualitative-model-based algorithm that diagnoses faults in a complex automated system (e.g., an exploratory robot, spacecraft, or aircraft). LPF forms a software test bed containing a Livingstone diagnosis engine, embedded in a simulated operating environment consisting of a simulator of the system to be diagnosed by Livingstone and a driver program that issues commands and faults according to a nondeterministic scenario provided by the user. LPF runs the test bed through all executions allowed by the scenario, checking for various selectable error conditions after each step. All components of the test bed are instrumented, so that execution can be single-stepped both backward and forward. The architecture of LPF is modular and includes generic interfaces to facilitate substitution of alternative versions of its different parts. Altogether, LPF provides a flexible, extensible framework for simulation-based analysis of diagnostic software; these characteristics also render it amenable to application to diagnostic programs other than Livingstone.
Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph
2018-05-11
Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
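The two variance-stabilizing transformations named in the abstract are standard formulas for a proportion x/n such as a study-level sensitivity or specificity (the formulas are the standard definitions, not code from the paper):

```python
import math

def arcsine_sqrt(x, n):
    """Arcsine square root transform of a proportion x/n."""
    return math.asin(math.sqrt(x / n))

def freeman_tukey(x, n):
    """Freeman-Tukey double arcsine transform of a proportion x/n."""
    return 0.5 * (math.asin(math.sqrt(x / (n + 1))) +
                  math.asin(math.sqrt((x + 1) / (n + 1))))

# Example: 25 true positives out of 100 diseased patients (sensitivity 0.25).
t1 = arcsine_sqrt(25, 100)
t2 = freeman_tukey(25, 100)
```

Both transforms make the sampling variance approximately independent of the underlying proportion, and the Freeman-Tukey version additionally remains well behaved at x = 0 and x = n, where the raw logit is undefined.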
Aulenbach, Brent T.; Burns, Douglas A.; Shanley, James B.; Yanai, Ruth D.; Bae, Kikang; Wild, Adam; Yang, Yang; Yi, Dong
2016-01-01
Estimating streamwater solute loads is a central objective of many water-quality monitoring and research studies, as loads are used to compare with atmospheric inputs, to infer biogeochemical processes, and to assess whether water quality is improving or degrading. In this study, we evaluate loads and associated errors to determine the best load estimation technique among three methods (a period-weighted approach, the regression-model method, and the composite method) based on a solute's concentration dynamics and sampling frequency. We evaluated a broad range of varying concentration dynamics with stream flow and season using four dissolved solutes (sulfate, silica, nitrate, and dissolved organic carbon) at five diverse small watersheds (Sleepers River Research Watershed, VT; Hubbard Brook Experimental Forest, NH; Biscuit Brook Watershed, NY; Panola Mountain Research Watershed, GA; and Río Mameyes Watershed, PR) with fairly high-frequency sampling during a 10- to 11-yr period. Data sets with three different sampling frequencies were derived from the full data set at each site (weekly plus storm/snowmelt events, weekly, and monthly) and errors in loads were assessed for the study period, annually, and monthly. For solutes that had a moderate to strong concentration–discharge relation, the composite method performed best, unless the autocorrelation of the model residuals was <0.2, in which case the regression-model method was most appropriate. For solutes that had a nonexistent or weak concentration–discharge relation (model R2 < about 0.3), the period-weighted approach was most appropriate. The lowest errors in loads were achieved for solutes with the strongest concentration–discharge relations. Sample and regression model diagnostics could be used to approximate overall accuracies and annual precisions. 
For the period-weighted approach, errors were lower when the variance in concentrations was lower, the degree of autocorrelation in the concentrations was higher, and the sampling frequency was higher. The period-weighted approach was most sensitive to sampling frequency. For the regression-model and composite methods, errors were lower when the variance in model residuals was lower. For the composite method, errors were lower when the autocorrelation in the residuals was higher. Guidelines to determine the best load estimation method based on solute concentration–discharge dynamics and diagnostics are presented, and should be applicable to other studies.
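The period-weighted approach can be sketched in a few lines: each sampled concentration is taken to represent the interval around its sample time, with boundaries at the midpoints between samples (an illustrative reconstruction under our own assumptions about units, not the authors' code):

```python
import numpy as np

def period_weighted_load(times, conc, flow):
    """Period-weighted load estimate.

    times: sample times (days), conc: concentrations (mg/L),
    flow: discharge at each sample (L/day). Returns load in mg.
    Each sample represents the period bounded by midpoints to its neighbours."""
    times = np.asarray(times, dtype=float)
    mid = (times[:-1] + times[1:]) / 2.0          # midpoints between samples
    edges = np.concatenate([[times[0]], mid, [times[-1]]])
    dt = np.diff(edges)                           # duration each sample represents
    return float(np.sum(np.asarray(conc, float) * np.asarray(flow, float) * dt))

# Hypothetical example: constant 1 mg/L at 2 L/day over 2 days -> 4 mg.
load = period_weighted_load([0.0, 1.0, 2.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0])
```

This makes the method's sensitivity to sampling frequency, noted above, easy to see: the estimate is exact only insofar as concentration and flow are well represented by the samples within each period.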
Elschot, Mattijs; Nijsen, Johannes F W; Lam, Marnix G E H; Smits, Maarten L J; Prince, Jip F; Viergever, Max A; van den Bosch, Maurice A A J; Zonnenberg, Bernard A; de Jong, Hugo W A M
2014-10-01
Radiation pneumonitis is a rare but serious complication of radioembolic therapy of liver tumours. Estimation of the mean absorbed dose to the lungs based on pretreatment diagnostic (99m)Tc-macroaggregated albumin ((99m)Tc-MAA) imaging should prevent this, with administered activities adjusted accordingly. The accuracy of (99m)Tc-MAA-based lung absorbed dose estimates was evaluated and compared to absorbed dose estimates based on pretreatment diagnostic (166)Ho-microsphere imaging and to the actual lung absorbed doses after (166)Ho radioembolization. This prospective clinical study included 14 patients with chemorefractory, unresectable liver metastases treated with (166)Ho radioembolization. (99m)Tc-MAA-based and (166)Ho-microsphere-based estimation of lung absorbed doses was performed on pretreatment diagnostic planar scintigraphic and SPECT/CT images. The clinical analysis was preceded by an anthropomorphic torso phantom study with simulated lung shunt fractions of 0 to 30 % to determine the accuracy of the image-based lung absorbed dose estimates after (166)Ho radioembolization. In the phantom study, (166)Ho SPECT/CT-based lung absorbed dose estimates were more accurate (absolute error range 0.1 to -4.4 Gy) than (166)Ho planar scintigraphy-based lung absorbed dose estimates (absolute error range 9.5 to 12.1 Gy). Clinically, the actual median lung absorbed dose was 0.02 Gy (range 0.0 to 0.7 Gy) based on posttreatment (166)Ho-microsphere SPECT/CT imaging. Lung absorbed doses estimated on the basis of pretreatment diagnostic (166)Ho-microsphere SPECT/CT imaging (median 0.02 Gy, range 0.0 to 0.4 Gy) were significantly better predictors of the actual lung absorbed doses than doses estimated on the basis of (166)Ho-microsphere planar scintigraphy (median 10.4 Gy, range 4.0 to 17.3 Gy; p < 0.001), (99m)Tc-MAA SPECT/CT imaging (median 2.5 Gy, range 1.2 to 12.3 Gy; p < 0.001), and (99m)Tc-MAA planar scintigraphy (median 5.5 Gy, range 2.3 to 18.2 Gy; p < 0.001). 
In clinical practice, lung absorbed doses are significantly overestimated by pretreatment diagnostic (99m)Tc-MAA imaging. Pretreatment diagnostic (166)Ho-microsphere SPECT/CT imaging accurately predicts lung absorbed doses after (166)Ho radioembolization.
Limited-information goodness-of-fit testing of diagnostic classification item response models.
Hansen, Mark; Cai, Li; Monroe, Scott; Li, Zhen
2016-11-01
Despite the growing popularity of diagnostic classification models (e.g., Rupp et al., 2010, Diagnostic measurement: theory, methods, and applications, Guilford Press, New York, NY) in educational and psychological measurement, methods for testing their absolute goodness of fit to real data remain relatively underdeveloped. For tests of reasonable length and for realistic sample size, full-information test statistics such as Pearson's X2 and the likelihood ratio statistic G2 suffer from sparseness in the underlying contingency table from which they are computed. Recently, limited-information fit statistics such as Maydeu-Olivares and Joe's (2006, Psychometrika, 71, 713) M2 have been found to be quite useful in testing the overall goodness of fit of item response theory models. In this study, we applied Maydeu-Olivares and Joe's (2006, Psychometrika, 71, 713) M2 statistic to diagnostic classification models. Through a series of simulation studies, we found that M2 is well calibrated across a wide range of diagnostic model structures and was sensitive to certain misspecifications of the item model (e.g., fitting disjunctive models to data generated according to a conjunctive model), errors in the Q-matrix (adding or omitting paths, omitting a latent variable), and violations of local item independence due to unmodelled testlet effects. On the other hand, M2 was largely insensitive to misspecifications in the distribution of higher-order latent dimensions and to the specification of an extraneous attribute. To complement the analyses of the overall model goodness of fit using M2, we investigated the utility of the Chen and Thissen (1997, J. Educ. Behav. Stat., 22, 265) local dependence statistic XLD2 for characterizing sources of misfit, an important aspect of model appraisal often overlooked in favour of overall statements. 
The XLD2 statistic was found to be slightly conservative (with Type I error rates consistently below the nominal level) but still useful in pinpointing the sources of misfit. Patterns of local dependence arising due to specific model misspecifications are illustrated. Finally, we used the M2 and XLD2 statistics to evaluate a diagnostic model fit to data from the Trends in Mathematics and Science Study, drawing upon analyses previously conducted by Lee et al. (2011, IJT, 11, 144). © 2016 The British Psychological Society.
2006-01-01
enabling technologies such as built-in-test, advanced health monitoring algorithms, reliability and component aging models, prognostics methods, and ... deployment and acceptance. This framework and vision is consistent with the onboard PHM (Prognostic and Health Management) as well as advanced ... monitored. In addition to the prognostic forecasting capabilities provided by monitoring system power, multiple confounding errors by electronic
ERIC Educational Resources Information Center
Mundia, Lawrence
2012-01-01
This mixed-methods study incorporated elements of survey, case study and action research approaches in investigating an at-risk child. Using an in-take interview, a diagnostic test, an error analysis, and a think-aloud clinical interview, the study identified the child's major presenting difficulties. These included: inability to use the four…
NASA Astrophysics Data System (ADS)
Shneider, Mikhail N.
2017-10-01
The ponderomotive perturbation in the interaction region of laser radiation with a low density and low-temperature plasma is considered. Estimates of the perturbation magnitude are determined from the plasma parameters, geometry, intensity, and wavelength of laser radiation. It is shown that ponderomotive perturbations can lead to large errors in the electron density when measured using Thomson scattering.
Virtual sensors for robust on-line monitoring (OLM) and Diagnostics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tipireddy, Ramakrishna; Lerchen, Megan E.; Ramuhalli, Pradeep
Unscheduled shutdown of nuclear power facilities for recalibration and replacement of faulty sensors can be expensive and disruptive to grid management. In this work, we present virtual (software) sensors that can replace a faulty physical sensor for a short duration, thus allowing recalibration to be safely deferred to a later time. The virtual sensor model uses a Gaussian process model to process input data from redundant and other nearby sensors. Predicted data include uncertainty bounds, including spatial association uncertainty and measurement noise and error. Using data from an instrumented cooling water flow loop testbed, the virtual sensor model has predicted correct sensor measurements and the associated error corresponding to a faulty sensor.
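The Gaussian process core of such a virtual sensor can be sketched as follows (a generic GP regression sketch with a squared-exponential kernel and synthetic data, not the implementation from this work; sensor layout and hyperparameters are assumptions):

```python
import numpy as np

def rbf(X1, X2, length=1.0, var=1.0):
    """Squared-exponential (RBF) kernel between two sets of inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_new, noise=1e-2):
    """GP regression: posterior mean and predictive std at new inputs."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_new, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = rbf(X_new, X_new) - Ks @ np.linalg.solve(K, Ks.T)
    sd = np.sqrt(np.clip(np.diag(cov), 0.0, None) + noise)  # includes noise
    return mean, sd

# Synthetic stand-in: readings from neighbouring sensors (inputs X) vs. the
# quantity the failed sensor would report (targets y).
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 5.0, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)
mu, sd = gp_predict(X, y, np.array([[2.5]]))   # "virtual" reading at 2.5
```

The predictive standard deviation is what gives the virtual sensor its uncertainty bounds: operators can see not just a substitute reading but how much to trust it.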
Design of a real-time two-color interferometer for MAST Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Gorman, T., E-mail: thomas.ogorman@ccfe.ac.uk; Naylor, G.; Scannell, R.
2014-11-15
A single chord two-color CO₂/HeNe (10.6/0.633 μm) heterodyne laser interferometer has been designed to measure the line integral electron density along the mid-plane of the MAST Upgrade tokamak, with a typical error of 1 × 10¹⁸ m⁻³ (∼2° phase error) at 4 MHz temporal resolution. To ensure this diagnostic system can be restored from any failures without stopping MAST Upgrade operations, it has been located outside of the machine area. The final design and initial testing of this system, including details of the optics, vibration isolation, and a novel phase detection scheme, are discussed in this paper.
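The quoted phase error maps to density through the standard interferometry relation Δφ = r_e λ ∫n_e dl (the standard textbook relation, not a formula taken from the paper); a quick check shows a 2° phase error at 10.6 μm indeed corresponds to a line density of order 10¹⁸:

```python
import math

R_E = 2.818e-15   # classical electron radius, m
LAM = 10.6e-6     # CO2 probing wavelength, m

def line_density(phase_deg, lam=LAM):
    """Line-integrated electron density (m^-2) from plasma phase shift,
    using delta_phi = r_e * lambda * integral(n_e dl)."""
    return math.radians(phase_deg) / (R_E * lam)

nl = line_density(2.0)   # line density corresponding to a 2 degree phase error
```

The result is roughly 1.2 × 10¹⁸ m⁻², consistent in magnitude with the quoted error; the second (HeNe) color is there to subtract mechanical vibration, which produces equal path-length phase shifts at both wavelengths while the plasma contribution scales with λ.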
Accuracy of Noninvasive Estimation Techniques for the State of the Cochlear Amplifier
NASA Astrophysics Data System (ADS)
Dalhoff, Ernst; Gummer, Anthony W.
2011-11-01
Estimation of the function of the cochlea in human is possible only by deduction from indirect measurements, which may be subjective or objective. Therefore, for basic research as well as diagnostic purposes, it is important to develop methods to deduce and analyse error sources of cochlear-state estimation techniques. Here, we present a model of technical and physiologic error sources contributing to the estimation accuracy of hearing threshold and the state of the cochlear amplifier and deduce from measurements of human that the estimated standard deviation can be considerably below 6 dB. Experimental evidence is drawn from two partly independent objective estimation techniques for the auditory signal chain based on measurements of otoacoustic emissions.
Logic design for dynamic and interactive recovery.
NASA Technical Reports Server (NTRS)
Carter, W. C.; Jessep, D. C.; Wadia, A. B.; Schneider, P. R.; Bouricius, W. G.
1971-01-01
Recovery in a fault-tolerant computer means the continuation of system operation with data integrity after an error occurs. This paper delineates two parallel concepts embodied in the hardware and software functions required for recovery: detection, diagnosis, and reconfiguration for the hardware; data integrity, checkpointing, and restart for the software. The hardware relies on the recovery variable set, checking circuits, and diagnostics, and the software relies on the recovery information set, audit, and reconstruct routines, to characterize the system state and assist in recovery when required. Of particular utility is a hardware unit, the recovery control unit, which serves as an interface between error detection and software recovery programs in the supervisor and provides dynamic interactive recovery.
Cognitive bias in clinical practice - nurturing healthy skepticism among medical students.
Bhatti, Alysha
2018-01-01
Errors in clinical reasoning, known as cognitive biases, are implicated in a significant proportion of diagnostic errors. Despite this knowledge, little emphasis is currently placed on teaching cognitive psychology in the undergraduate medical curriculum. Understanding the origin of these biases and their impact on clinical decision making helps stimulate reflective practice. This article outlines some of the common types of cognitive biases encountered in the clinical setting as well as cognitive debiasing strategies. Medical educators should nurture healthy skepticism among medical students by raising awareness of cognitive biases and equipping them with robust tools to circumvent such biases. This will enable tomorrow's doctors to improve the quality of care delivered, thus optimizing patient outcomes.
Sequential lineup laps and eyewitness accuracy.
Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A
2011-08-01
Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single lap versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.
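The diagnosticity measure compared across procedures is conventionally the ratio of correct identifications (culprit-present lineups) to false identifications of the innocent suspect (culprit-absent lineups); a sketch with hypothetical counts (the definition is the standard one in the eyewitness literature, the numbers are not from these experiments):

```python
def diagnosticity(correct_ids, present_n, false_ids, absent_n):
    """Lineup diagnosticity ratio: culprit hit rate over innocent-suspect
    false identification rate."""
    hit_rate = correct_ids / present_n
    false_rate = false_ids / absent_n
    return hit_rate / false_rate

# Hypothetical: 30/60 correct IDs when the culprit is present,
# 5/60 false IDs of the designated innocent suspect when absent.
ratio = diagnosticity(30, 60, 5, 60)
```

A higher ratio means picks are more probative of guilt, which is why a second lap that adds errors faster than correct identifications drives diagnosticity down.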
2012-01-01
The translation of knowledge into rational care is as essential and pressing a task as the development of new diagnostic or therapeutic devices, and is arguably more important. The emerging science of health care delivery has identified the central role of human factor ergonomics in the prevention of medical error, omission, and waste. Novel informatics and systems engineering strategies provide an excellent opportunity to improve the design of acute care delivery. In this article, future hospitals are envisioned as organizations built around smart environments that facilitate consistent delivery of effective, equitable, and error-free care focused on patient-centered rather than provider-centered outcomes. PMID:22546172
NIF Ignition Target 3D Point Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, O; Marinak, M; Milovich, J
2008-11-05
We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) Configuration controlled input files; (2) Common file for 2D and 3D, different types of capsules (symcap, etc.); and (3) Can obtain target dimensions, laser pulse, and diagnostics settings automatically from NIF Campaign Management Tool. Using 3D Hydra calculations to investigate different problems: (1) Intrinsic 3D asymmetry; (2) Tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) Synthetic diagnostics.
Two-point motional Stark effect diagnostic for Madison Symmetric Torus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, J.; Den Hartog, D. J.; Caspary, K. J.
2010-10-15
A high-precision spectral motional Stark effect (MSE) diagnostic provides internal magnetic field measurements for Madison Symmetric Torus (MST) plasmas. Currently, MST uses two spatial views - on the magnetic axis and on the mid-minor (off-axis) radius, the latter added recently. A new analysis scheme has been developed to infer both the pitch angle and the magnitude of the magnetic field from MSE spectra. Systematic errors are reduced by using atomic data from atomic data and analysis structure in the fit. Reconstructed current density and safety factor profiles are more strongly and globally constrained with the addition of the off-axis radius measurement than with the on-axis one only.
Sari, A Akbari; Doshmangir, L; Sheldon, T
2010-01-01
Understanding the nature and causes of medical adverse events may help their prevention. This systematic review explores the types, risk factors, and likely causes of preventable adverse events in the hospital sector. MEDLINE (1970-2008), EMBASE, CINAHL (1970-2005) and the reference lists were used to identify the studies, and a structured narrative method was used to synthesise the data. Operative adverse events were more common but less preventable, and diagnostic adverse events less common but more preventable, than other adverse events. Preventable adverse events were often associated with more than one contributory factor. The majority of adverse events were linked to individual human error, and a significant proportion of these caused serious patient harm. Equipment failure was involved in a small proportion of adverse events and rarely caused patient harm. The proportion of system failures varied widely, ranging from 3% to 85% depending on the data collection and classification methods used. Operative adverse events are more common but less preventable than diagnostic adverse events. Adverse events are usually associated with more than one contributory factor; the majority are linked to individual human error, and a proportion of these to system failure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
FLANAGAN,A; SCHACHTER,J.M; SCHISSEL,D.P
2003-02-01
A Data Analysis Monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This new system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, CLIPS to implement expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse.
System to monitor data analyses and results of physics data validation between pulses at DIII-D
NASA Astrophysics Data System (ADS)
Flanagan, S.; Schachter, J. M.; Schissel, D. P.
2004-06-01
A data analysis monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility (http://nssrv1.gat.com:8000/dam). The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded, thus increasing the efficiency of experimental time. An example of a consistency check is comparing the experimentally measured neutron rate and the expected neutron emission, RDD0D. A significant difference between these two values could indicate a problem with one or more diagnostics, or the presence of unanticipated phenomena in the plasma. This system also tracks the progress of MDSplus dispatched data analysis software and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, C Language Integrated Production System to implement expert system logic, and displays its results to multiple web clients via Hypertext Markup Language. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse.
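The between-pulse consistency check described above can be sketched as a simple relative-tolerance comparison. This is an illustrative assumption, not the rule DAM actually implements, and the numbers are arbitrary:

```python
def consistency_check(measured, expected, rel_tol=0.25):
    """Flag a discrepancy when the measured value deviates from the
    expected value by more than rel_tol (relative to the expected value).
    The 25% threshold is an illustrative assumption, not DAM's actual rule."""
    if expected == 0:
        return measured != 0
    return abs(measured - expected) / abs(expected) > rel_tol

# Measured neutron rate vs. expected emission RDD0D (arbitrary units)
print(consistency_check(1.0e14, 1.1e14))  # False: within tolerance
print(consistency_check(0.5e14, 1.1e14))  # True: flag for investigation
```

In a between-pulse setting, a flagged result would trigger the more detailed views the abstract mentions, so the problem can be addressed before the next pulse.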
Fais, Paolo; Viero, Alessia; Viel, Guido; Giordano, Renzo; Raniero, Dario; Kusstatscher, Stefano; Giraudo, Chiara; Cecchetto, Giovanni; Montisci, Massimo
2018-04-07
Necrotizing fasciitis (NF) is a life-threatening infection of soft tissues spreading along the fasciae to the surrounding musculature, subcutaneous fat and overlying skin, which can rapidly lead to septic shock and death. Owing to the worldwide increase in medical malpractice lawsuits, above all in Western countries, the forensic pathologist is frequently asked to investigate post-mortem cases of NF in order to determine the cause of death and to identify any related negligence and/or medical error. Herein, we review the medical literature dealing with cases of NF in a post-mortem setting, present a case series of seven NF fatalities, and discuss the main ante-mortem and post-mortem diagnostic challenges of both clinical and forensic interest. In particular, we address the following issues: (1) the origin of soft tissue infections, (2) the micro-organisms involved, (3) the time of progression of the infection to NF, (4) the clinical and histological staging of NF, and (5) the pros and cons of clinical and laboratory scores, together with specific forensic issues related to the reconstruction of ideal medical conduct and the evaluation of the causal value/link of any possible medical error.
[Medical expert systems and clinical needs].
Buscher, H P
1991-10-18
The rapid expansion of computer-based systems for problem solving and decision making in medicine, the so-called medical expert systems, emphasizes the need to reappraise their indications and value. Where specialist knowledge is required, and in particular where medical decisions are susceptible to error, these systems will probably serve as valuable support. In the near future, computer-based systems should be able to aid the interpretation of findings from technical investigations and the control of treatment, especially where rapid reactions are necessary despite the need for complex analysis of the investigated parameters. In the more distant future, complete support of the diagnostic process, from history taking to final diagnosis, is possible. This promises to be particularly attractive for the diagnosis of rare diseases, for difficult differential diagnoses, and for decision making involving expensive, risky, or new diagnostic or therapeutic methods. The physician needs to be aware of certain dangers, ranging from misleading information to outright abuse. Patient information often depends on subjective reports and error-prone observations. Although based on such problematic knowledge, computer-generated decisions may have an imperative effect on medical decision making. It must also be borne in mind that medical decisions should always combine the rational with a consideration of human motives.
Kunakorn, M; Raksakai, K; Pracharktam, R; Sattaudom, C
1999-03-01
Our experiences from 1993 to 1997 in the development and use of IS6110-based PCR for the diagnosis of extrapulmonary tuberculosis in a routine clinical setting revealed that error-correcting processes can improve existing diagnostic methodology. The reamplification method initially used had a sensitivity of 90.91% and a specificity of 93.75%. Concern focused on the false positive results of this method caused by product-carryover contamination. The method was therefore changed to single-round PCR with carryover prevention by uracil DNA glycosylase (UDG), resulting in 100% specificity but only 63% sensitivity. Dot blot hybridization was added after the single-round PCR, increasing the sensitivity to 87.50%. However, false positivity resulted from nonspecific dot blot hybridization signals, reducing the specificity to 89.47%. The hybridization step was then changed to a Southern blot with a new oligonucleotide probe, giving a sensitivity of 85.71% and raising the specificity to 99.52%. We conclude that a PCR protocol for routine clinical use should include UDG for carryover prevention and hybridization with specific probes to optimize diagnostic sensitivity and specificity in extrapulmonary tuberculosis testing.
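The sensitivity and specificity figures quoted above follow from the standard confusion-matrix definitions. A minimal sketch; the counts below are hypothetical, chosen only so the results reproduce the 90.91%/93.75% reamplification figures:

```python
def sensitivity(tp, fn):
    """True-positive rate: proportion of diseased cases the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: proportion of disease-free cases the test clears."""
    return tn / (tn + fp)

# Hypothetical counts: 20 TP, 2 FN, 30 TN, 2 FP
print(round(100 * sensitivity(20, 2), 2))   # 90.91
print(round(100 * specificity(30, 2), 2))   # 93.75
```

The abstract's narrative is a trade-off along exactly these two axes: UDG raised specificity at the cost of sensitivity, and the probe-based hybridization steps recovered sensitivity without sacrificing specificity.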
Remote maintenance monitoring system
NASA Technical Reports Server (NTRS)
Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)
1992-01-01
A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofitted to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.
Grudzińska, Ewa; Modrzejewska, Monika
2018-01-01
Myopia is the most common refractive error and the subject of interest of various studies assessing ocular blood flow. Increasing refractive error and axial elongation of the eye result in the stretching and thinning of the scleral, choroid, and retinal tissues and the decrease in retinal vessel diameter, disturbing ocular blood flow. Local and systemic factors known to change ocular blood flow include glaucoma, medications and fluctuations in intraocular pressure, and metabolic parameters. Techniques and tools assessing ocular blood flow include, among others, laser Doppler flowmetry (LDF), retinal function imager (RFI), laser speckle contrast imaging (LSCI), magnetic resonance imaging (MRI), optical coherence tomography angiography (OCTA), pulsatile ocular blood flowmeter (POBF), fundus pulsation amplitude (FPA), colour Doppler imaging (CDI), and Doppler optical coherence tomography (DOCT). Many researchers consistently reported lower blood flow parameters in myopic eyes regardless of the used diagnostic method. It is unclear whether this is a primary change that causes secondary thinning of ocular tissues or quite the opposite; that is, the mechanical stretching of the eye wall reduces its thickness and causes a secondary lower demand of tissues for oxygen. This paper presents a review of studies assessing ocular blood flow in myopes.
A soft kinetic data structure for lesion border detection.
Kockara, Sinan; Mete, Mutlu; Yip, Vincent; Lee, Brendan; Aydin, Kemal
2010-06-15
Medical imaging and image processing techniques, ranging from microscopic to macroscopic, have become one of the main components of diagnostic procedures that assist dermatologists in their medical decision-making processes. Computer-aided segmentation and border detection in dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field, mainly because of inter- and intra-observer variation in human interpretation. In this study, a novel approach for automatic border detection in dermoscopic images, the graph spanner, is proposed: a proximity-graph representation of dermoscopic images used to detect regions and borders in skin lesions. The graph spanner approach is examined on a set of 100 dermoscopic images whose borders, manually drawn by a dermatologist, are used as the ground truth. Error rates, false positives, and false negatives, along with true positives and true negatives, are quantified by digitally comparing results with the manually determined borders. The results show that the highest precision and recall rates obtained in determining lesion boundaries are 100%; however, accuracy averages out at 97.72%, and the mean border error is 2.28% for the whole dataset.
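The reported rates correspond to the usual pixel-level definitions of precision, recall, and accuracy against the dermatologist-drawn ground truth. A minimal sketch; the pixel counts in the example are hypothetical:

```python
def precision(tp, fp):
    """Fraction of pixels labelled 'lesion' that truly are lesion."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of true lesion pixels that were labelled 'lesion'."""
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """Fraction of all pixels classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts for one segmented image
tp, tn, fp, fn = 900, 9000, 50, 50
print(round(100 * accuracy(tp, tn, fp, fn), 2))  # 99.0
```

Note that a method can reach 100% precision and recall on its best images while the dataset-wide accuracy stays below 100%, as in the figures above.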
NASA Astrophysics Data System (ADS)
Kostyukov, V. N.; Naumenko, A. P.
2017-08-01
The paper addresses the pressing issue of evaluating the impact of operator actions on the safe operation of complex technological systems, considering the application of condition monitoring systems to elements and sub-systems of petrochemical production facilities. The main task of the research is to identify factors and criteria describing monitoring system properties that make it possible to evaluate the impact of personnel errors on the operation of real-time condition monitoring and diagnostic systems for petrochemical machinery, and to find objective criteria for classifying monitoring systems with the human factor taken into account. On the basis of real-time condition monitoring concepts (the risk of skipping a sudden failure, static and dynamic error, monitoring systems), one may evaluate the impact of personnel qualification on monitoring system operation, in terms of errors in the actions of personnel or operators while receiving information from monitoring systems and operating a technological system. The operator is considered part of the technological system, and operator behaviour is usually described as a combination of the following stages: input signal (information perception), reaction (decision making), and response (decision implementation). Based on several studies of the behaviour of nuclear power station operators in the USA, Italy, and other countries, as well as on research conducted by Russian scientists, the data on operator reliability required for analysing operator behaviour with diagnostic and monitoring systems at technological facilities were selected.
The calculations revealed that, for the monitoring system selected as an example, the risk of skipping a failure for the set values of static (less than 0.01) and dynamic (less than 0.001) error, considering all related factors in the data on the reliability of information perception, decision making, and reaction, is 0.037; when all facilities and the error probability are under control, it is not more than 0.027. When only pump and compressor units are under control, the risk of skipping a failure is not more than 0.022, with the probability of an error in the operator's actions not more than 0.011. The work shows that, on the basis of these results, operator reliability can be assessed for almost any kind of production, but only with respect to its technological characteristics, since the psychological and general training of operators varies considerably across industries. Using the latest techniques of engineering psychology and the design of data support, situation assessment, and decision-making and response systems, as well as advances in condition monitoring in various industries, one can evaluate the probability of skipping a hazardous condition while accounting for static error, dynamic error, and the human factor.
The Diagnostic Accuracy of Incisional Biopsy in the Oral Cavity.
Chen, Sara; Forman, Michael; Sadow, Peter M; August, Meredith
2016-05-01
To determine the accuracy of incisional biopsy examination to diagnose oral lesions. This retrospective cohort study was performed to determine the concordance rate between incisional biopsy examination and definitive resection diagnosis for different oral lesions. The study sample was derived from the population of patients who presented to the Department of Oral and Maxillofacial Surgery, Massachusetts General Hospital (Boston, MA) from January 2005 through December 2012. Inclusion criteria were the diagnosis of an oral lesion from an incisional biopsy examination, subsequent diagnosis from the definitive resection of the same lesion, and complete clinical and pathologic patient records. The predictor variables were the origin and size of the lesion. The primary outcome variable was concordance between the provisional incisional biopsy diagnosis and definitive pathologic resection diagnosis. The secondary outcome variable was type of biopsy error for the discordant cases. Incisional biopsy errors were assessed and grouped into 5 categories: 1) sampling error; 2) insufficient tissue for diagnosis; 3) presence of inflammation making diagnosis difficult; 4) artifact; and 5) pathologist discordance. A total of 272 patients met the inclusion criteria. The study sample had a mean age of 47.4 years and 55.7% were women. Of these cases, 242 (88.9%) were concordant when comparing the biopsy and final resection pathology reports. At histologic evaluation, 60.0% of discordant findings were attributed to sampling error, 23.3% to pathologist discrepancy, 13.3% to insufficient tissue provided in the biopsy specimen, and 3.4% to inflammation obscuring diagnosis. Overall, concordant cases had a larger average biopsy volume (1.53 cm(3)) than discordant cases (0.42 cm(3)). The data collected indicate an 88.9% diagnostic concordance with final pathologic results for incisional oral biopsy diagnoses. 
Sixty percent of discordance was attributed to sampling error when sampled tissue was not representative of the lesion in toto. Multiple-site biopsy specimens and larger-volume samples allowed for a more accurate diagnosis. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Chew, Keng Sheng; Kueh, Yee Cheng; Abdul Aziz, Adlihafizi
2017-03-21
Despite their impact on diagnostic accuracy, there is a paucity of literature on questionnaire tools to assess clinicians' awareness of cognitive errors. A validation study was conducted to develop a questionnaire tool to evaluate the Clinician's Awareness Towards Cognitive Errors (CATChES) in clinical decision making. The questionnaire is divided into two parts: Part A evaluates clinicians' awareness of cognitive errors in clinical decision making, while Part B evaluates their perception of specific cognitive errors. Content validation for both parts was determined first, followed by construct validation for Part A. Construct validation for Part B was not determined, as its responses were set in a dichotomous format. For content validation, all items in both Part A and Part B were rated as "excellent" in terms of their relevance in clinical settings. For construct validation of Part A using exploratory factor analysis (EFA), a two-factor model with a total variance extraction of 60% was determined. Two items were deleted, and the EFA was then repeated, showing that all factor loadings were above the cut-off value of 0.5. The Cronbach's alpha values for both factors are above 0.6. The CATChES questionnaire is a valid tool for evaluating awareness among clinicians of cognitive errors in clinical decision making.
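The internal-consistency criterion used above, Cronbach's alpha, can be computed directly from raw item scores. A generic sketch (the example data are synthetic, not the CATChES responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of respondent
    scores (all items must cover the same respondents, in order)."""
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # per-respondent total scores
    return (k / (k - 1)) * (1 - sum(var(it) for it in items) / var(totals))

# Two perfectly covarying items -> alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [2, 3, 4, 5]]))
```

Values above roughly 0.6-0.7, as reported for the two CATChES factors, are conventionally read as acceptable internal consistency.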
Interinstitutional review of slides for forensic pathology: types of inconsistencies.
Ersoy, Gokhan; Akyildiz, Elif Ulker; Korkmaz, Gulay; Albek, Emre
2010-09-01
Because of the specific structure of forensic medicine in Turkey, re-examination of histopathologic specimens is a frequent practice. The aim of the present study is to assess microscopic diagnostic consistency in forensic pathology between different laboratories. Reports of the Council of Forensic Medicine between 2001 and 2004 were examined, and 150 cases with a second pathologic examination were found. Histopathologic reports from peripheral laboratories were compared with those made by the Council pathologists with regard to diagnostic consistency. Consistency was assessed in 3 groups and 1 subgroup: group 1, consistent or minor inconsistency, which includes a major-consistency subgroup; group 2, major inconsistency in which the second diagnosis is lethal; and group 3, major inconsistency in which the first diagnosis is lethal. The lung was found to be the organ with the highest frequency of major diagnostic inconsistency (groups 2 and 3) and of major consistency; bronchopneumonia was the most common diagnosis. The brain had the highest frequency of intercenter overall diagnostic consistency (90.2%, group 1). Myocardial infarction was the diagnosis most frequently rejected on re-evaluation (group 3). In conclusion, forensic pathology requires different experience than surgical pathology. In cases of discrepancy between the anamnesis of the lethal event and the pathologic findings, re-evaluation of the specimen is mandatory to avoid diagnostic errors. Quality assurance systems that include both internal and external control mechanisms will improve diagnostic reliability.
Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N
2015-07-01
The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. 
Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this hypothesis.
Aaron, Shawn D; Tan, Wan C; Bourbeau, Jean; Sin, Don D; Loves, Robyn H; MacNeil, Jenna; Whitmore, George A
2017-08-01
Chronic obstructive pulmonary disease (COPD) is a chronic, progressive disease, and reversal of COPD diagnosis is thought to be uncommon. To determine whether a spirometric diagnosis of mild or moderate COPD is subject to variability and potential error. We examined two prospective cohort studies that enrolled subjects with mild to moderate post-bronchodilator airflow obstruction. The Lung Health Study (n = 5,861 subjects; study duration, 5 yr) and the Canadian Cohort of Obstructive Lung Disease (CanCOLD) study (n = 1,551 subjects; study duration, 4 yr) were examined to determine frequencies of (1) diagnostic instability, represented by how often patients initially met criteria for a spirometric diagnosis of COPD but then crossed the diagnostic threshold to normal and then crossed back to COPD over a series of annual visits, or vice versa; and (2) diagnostic reversals, defined as how often an individual's COPD diagnosis at the study outset reversed to normal by the end of the study. Diagnostic instability was common and occurred in 19.5% of the Lung Health Study subjects and 6.4% of the CanCOLD subjects. Diagnostic reversals of COPD from the beginning to the end of the study period occurred in 12.6% and 27.2% of subjects in the Lung Health Study and CanCOLD study, respectively. The risk of diagnostic instability was greatest for subjects whose baseline FEV1/FVC value was closest to the diagnostic threshold, and the risk of diagnostic reversal was greatest for subjects who quit smoking during the study. A single post-bronchodilator spirometric assessment may not be reliable for diagnosing COPD in patients with mild to moderate airflow obstruction at baseline.
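The link between proximity to the diagnostic threshold and instability can be illustrated with a toy simulation. It assumes the usual GOLD fixed-ratio criterion (post-bronchodilator FEV1/FVC < 0.70) and Gaussian measurement noise; the noise level is an arbitrary assumption, not an estimate from either cohort:

```python
import random

def has_copd(fev1_fvc):
    """GOLD fixed-ratio spirometric criterion (assumed here)."""
    return fev1_fvc < 0.70

def instability_rate(true_ratio, noise_sd=0.03, visits=5, trials=10_000, seed=1):
    """Fraction of simulated subjects whose annual classification flips
    at least once across `visits` measurements (illustrative only)."""
    rng = random.Random(seed)
    unstable = 0
    for _ in range(trials):
        labels = {has_copd(true_ratio + rng.gauss(0, noise_sd))
                  for _ in range(visits)}
        unstable += len(labels) == 2  # both COPD and normal observed
    return unstable / trials

# Subjects near the 0.70 cut-off flip far more often than those well below it
print(instability_rate(0.69) > instability_rate(0.60))  # True
```

This reproduces the qualitative finding above: diagnostic instability concentrates in subjects whose true ratio sits near the threshold, where noise alone can push single measurements across it.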
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-01-01
Aims: A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods: We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results: Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions: The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
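A Bland-Altman comparison of two quantitative assays reduces to the bias (mean paired difference) and the 95% limits of agreement. A minimal sketch; the paired values in the example are hypothetical, not data from the study:

```python
from statistics import mean, stdev

def bland_altman(x, y):
    """Return (bias, lower LoA, upper LoA) for paired measurements,
    using the conventional bias +/- 1.96*SD limits of agreement."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical variant allele fractions from a reference and a new assay
ref = [0.10, 0.25, 0.40, 0.52, 0.75]
new = [0.12, 0.24, 0.43, 0.55, 0.74]
bias, lo, hi = bland_altman(new, ref)
```

A constant error shows up as a nonzero bias, while a proportional error makes the differences grow with the measurement magnitude, which is why the authors pair Bland-Altman plots with (Deming) regression rather than relying on R2 alone.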
Kranz, R
2015-01-01
Objective: To establish the prevalence of red dot markers in a sample of wrist radiographs and to identify any anatomical and/or pathological characteristics that predict “incorrect” red dot classification. Methods: Accident and emergency (A&E) wrist cases from a digital imaging and communications in medicine/digital teaching library were examined for red dot prevalence and for the presence of several anatomical and pathological features. Binary logistic regression analyses were run to establish if any of these features were predictors of incorrect red dot classification. Results: 398 cases were analysed. Red dot was “incorrectly” classified in 8.5% of cases; 6.3% were “false negatives” (“FNs”) and 2.3% false positives (FPs) (figures to one decimal place). Old fractures [odds ratio (OR), 5.070 (1.256–20.471)] and reported degenerative change [OR, 9.870 (2.300–42.359)] were found to predict FPs. Frykman V [OR, 9.500 (1.954–46.179)], Frykman VI [OR, 6.333 (1.205–33.283)] and non-Frykman positive abnormalities [OR, 4.597 (1.264–16.711)] predict “FNs”. Old fractures and Frykman VI were predictive of error at 90% confidence interval (CI); the rest at 95% CI. Conclusion: The five predictors of incorrect red dot classification may inform the image interpretation training of radiographers and other professionals to reduce diagnostic error. Verification with larger samples would reinforce these findings. Advances in knowledge: All healthcare providers strive to eradicate diagnostic error. By examining specific anatomical and pathological predictors on radiographs for such error, as well as extrinsic factors that may affect reporting accuracy, image interpretation training can focus on these “problem” areas and influence which radiographic abnormality detection schemes are appropriate to implement in A&E departments. PMID:25496373
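The odds ratios and confidence intervals above come from binary logistic regression; for a single binary predictor they reduce to the classic 2×2-table formula with a log-scale Wald interval. A minimal sketch (the counts are hypothetical, not the study's data):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = misclassified/correct cases with the feature (e.g. old fracture),
    c/d = misclassified/correct cases without it."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts: 10/10 with the feature, 5/10 without
print(odds_ratio_ci(10, 10, 5, 10)[0])  # 2.0
```

The wide intervals reported above (e.g. 1.256–20.471) are typical of small cell counts, since the standard error of log(OR) sums the reciprocals of all four cells.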
Diagnostic pitfalls in sporadic transthyretin familial amyloid polyneuropathy (TTR-FAP).
Planté-Bordeneuve, V; Ferreira, A; Lalu, T; Zaros, C; Lacroix, C; Adams, D; Said, G
2007-08-14
Transthyretin familial amyloid polyneuropathies (TTR-FAPs) are autosomal dominant neuropathies of fatal outcome within 10 years after inaugural symptoms. Late diagnosis in patients who present as nonfamilial cases delays adequate management and genetic counseling. Clinical data of the 90 patients who presented as nonfamilial cases of the 300 patients of our cohort of patients with TTR-FAP were reviewed. They were 21 women and 69 men with a mean age at onset of 61 (extremes: 38 to 78 years) and 17 different mutations of the TTR gene including Val30Met (38 cases), Ser77Tyr (16 cases), Ile107Val (15 cases), and Ser77Phe (5 cases). Initial manifestations included mainly limb paresthesias (49 patients) or pain (17 patients). Walking difficulty and weakness (five patients) and cardiac or gastrointestinal manifestations (five patients), were less common at onset. Mean interval to diagnosis was 4 years (range 1 to 10 years); 18 cases were mistaken for chronic inflammatory demyelinating polyneuropathy, which was the most common diagnostic error. At referral a length-dependent sensory loss affected the lower limbs in 2, all four limbs in 20, and four limbs and anterior trunk in 77 patients. All sensations were affected in 60 patients (67%), while small fiber dysfunction predominated in the others. Severe dysautonomia affected 80 patients (90%), with postural hypotension in 52, gastrointestinal dysfunction in 50, impotence in 58 of 69 men, and sphincter disturbance in 31. Twelve patients required a cardiac pacemaker. Nerve biopsy was diagnostic in 54 of 65 patients and salivary gland biopsy in 20 of 30. Decreased nerve conduction velocity, increased CSF protein, negative biopsy findings, and false immunolabeling of amyloid deposits were the main causes of diagnostic errors. 
We conclude that DNA testing, which is the most reliable test for TTR-FAP, should be performed in patients with a progressive length-dependent small fiber polyneuropathy of unknown origin, especially when associated with autonomic dysfunction.
Cupek, Rafal; Ziębiński, Adam
2016-01-01
Rheumatoid arthritis is the most common rheumatic disease with arthritis, and causes substantial functional disability in approximately 50% of patients after 10 years. Accurate measurement of disease activity is crucial to providing adequate treatment and care to patients. This study focuses on a computer-aided diagnostic system that supports the assessment of synovitis severity, developed within the joint Polish-Norwegian MEDUSA research project on automated assessment of the severity of synovitis. Semiquantitative ultrasound with power Doppler is a reliable and widely used method of assessing synovitis: the ultrasound examiner grades synovitis on a scoring system from 0 to 3, and the activity score is estimated on the basis of the examiner's experience or standardized ultrasound atlases. The method needs trained medical personnel, and the result can be affected by human error. The prototype of a computer-aided diagnostic system and the algorithms essential for the analysis of ultrasonic images of finger joints are the main scientific outputs of the MEDUSA project. The MEDUSA Evaluation System prototype uses bone, skin, joint, and synovitis area detectors for mutual, structural-model-based evaluation of synovitis. Finally, several algorithms that support the semi-automatic or automatic detection of the bone region were prepared, as well as a system that uses a statistical data processing approach to automatically localize the regions of interest.
Rencic, Joseph; Trowbridge, Robert L; Fagan, Mark; Szauter, Karen; Durning, Steven
2017-11-01
Recent reports, including the Institute of Medicine's Improving Diagnosis in Health Care, highlight the pervasiveness and underappreciated harm of diagnostic error, and recommend enhancing health care professional education in diagnostic reasoning. However, little is known about clinical reasoning curricula at US medical schools. To describe clinical reasoning curricula at US medical schools and to determine the attitudes of internal medicine clerkship directors toward teaching of clinical reasoning. Cross-sectional multicenter study. US institutional members of the Clerkship Directors in Internal Medicine (CDIM). Examined responses to a survey that was emailed in May 2015 to CDIM institutional representatives, who reported on their medical school's clinical reasoning curriculum. The response rate was 74% (91/123). Most respondents reported that a structured curriculum in clinical reasoning should be taught in all phases of medical education, including the preclinical years (64/85; 75%), clinical clerkships (76/87; 87%), and the fourth year (75/88; 85%), and that more curricular time should be devoted to the topic. Respondents indicated that most students enter the clerkship with only poor (25/85; 29%) to fair (47/85; 55%) knowledge of key clinical reasoning concepts. Most institutions (52/91; 57%) surveyed lacked sessions dedicated to these topics. Lack of curricular time (59/67, 88%) and faculty expertise in teaching these concepts (53/76, 69%) were identified as barriers. Internal medicine clerkship directors believe that clinical reasoning should be taught throughout the 4 years of medical school, with the greatest emphasis in the clinical years. However, only a minority reported having teaching sessions devoted to clinical reasoning, citing a lack of curricular time and faculty expertise as the largest barriers. 
Our findings suggest that additional institutional and national resources should be dedicated to developing clinical reasoning curricula to improve diagnostic accuracy and reduce diagnostic error.
Automatic diagnostic system for measuring ocular refractive errors
NASA Astrophysics Data System (ADS)
Ventura, Liliane; Chiaradia, Caio; de Sousa, Sidney J. F.; de Castro, Jarbas C.
1996-05-01
Ocular refractive errors (myopia, hyperopia and astigmatism) are automatically and objectively determined by projecting a light target onto the retina using an infra-red (850 nm) diode laser. The light vergence which emerges from the eye (light scattered from the retina) is evaluated in order to determine the corresponding ametropia. The system basically consists of projecting a target (ring) onto the retina and analyzing the scattered light with a CCD camera. The light scattered by the eye is divided into six portions (3 meridians) by using a mask and a set of six prisms. The distance between the two images provided by each of the meridians leads to the refractive error of the referred meridian. Hence, it is possible to determine the refractive error at three different meridians, which gives the exact solution for the eye's refractive error (spherical and cylindrical components and the axis of the astigmatism). The computational basis used for the image analysis is a heuristic search, which provides satisfactory calculation times for our purposes. The peculiar shape of the target, a ring, provides a wider range of measurement and also saves parts of the retina from unnecessary laser irradiation. Measurements were done in artificial and in vivo eyes (using cycloplegics) and the results were in good agreement with the retinoscopic measurements.
Modeling and design of a beam emission spectroscopy diagnostic for the negative ion source NIO1
NASA Astrophysics Data System (ADS)
Barbisan, M.; Zaniol, B.; Cavenago, M.; Pasqualotto, R.
2014-02-01
Consorzio RFX and INFN-LNL are building a flexible small ion source (Negative Ion Optimization 1, NIO1) capable of producing about 130 mA of H- ions accelerated to 60 keV. The aim of the experiment is to test and develop the instrumentation for SPIDER and MITICA, the prototypes, respectively, of the negative ion sources and of the whole neutral beam injectors which will operate in the ITER experiment. Like SPIDER and MITICA, NIO1 will be monitored with beam emission spectroscopy (BES), a non-invasive diagnostic based on the analysis of the spectrum of the Hα emission produced by the interaction of the energetic ions with the background gas. The aim of BES is to monitor the direction, divergence, and uniformity of the ion beam. The precision of these measurements depends on a number of factors related to the physics of production and acceleration of the negative ions, to the geometry of the beam, and to the collection optics. These elements were considered in a set of codes developed to identify the configuration of the diagnostic which minimizes the measurement errors. The model was already used to design the BES diagnostics for SPIDER and MITICA. The paper presents the model and describes its application to the design of the BES diagnostic in NIO1.
Second opinion oral pathology referrals in New Zealand.
Seo, B; Hussaini, H M; Rich, A M
2017-04-01
Referral for a second opinion is an important aspect of pathology practice, which reduces the rate of diagnostic error and ensures consistency of diagnoses. The Oral Pathology Centre (OPC) is the only specialist oral diagnostic centre in New Zealand. The OPC provides diagnostic services to dentists and dental specialists throughout New Zealand and acts as a referral centre for second opinions on oral pathology specimens that have been sent to anatomical pathologists. The aim of this study was to review second opinion referral cases sent to the OPC over a 15-year period and to assess the levels of concordance between the original and final diagnoses. The findings indicated that the majority of referred cases were odontogenic lesions, followed by connective tissue, epithelial and salivary lesions. The most prevalent diagnoses were ameloblastoma and keratocystic odontogenic tumour, followed by oral squamous cell carcinoma. Discordant diagnoses were recorded in 24% of cases. Diagnostic discrepancies were more frequent for odontogenic and salivary gland lesions, resulting in changes of diagnosis. Second opinion on oral pathology cases should be encouraged in view of the relative rarity of these lesions in general pathology laboratories and the rates of diagnostic discrepancy, particularly for odontogenic and salivary gland lesions. Copyright © 2017 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.
Elliott, D.G.; Applegate, L.J.; Murray, A.L.; Purcell, M.K.; McKibben, C.L.
2013-01-01
No gold standard assay exhibiting error-free classification of results has been identified for detection of Renibacterium salmoninarum, the causative agent of salmonid bacterial kidney disease. Validation of diagnostic assays for R. salmoninarum has been hindered by its unique characteristics and biology, and difficulties in locating suitable populations of reference test animals. Infection status of fish in test populations is often unknown, and it is commonly assumed that the assay yielding the most positive results has the highest diagnostic accuracy, without consideration of misclassification of results. In this research, quantification of R. salmoninarum in samples by bacteriological culture provided a standardized measure of viable bacteria to evaluate analytical performance characteristics (sensitivity, specificity and repeatability) of non-culture assays in three matrices (phosphate-buffered saline, ovarian fluid and kidney tissue). Non-culture assays included polyclonal enzyme-linked immunosorbent assay (ELISA), direct smear fluorescent antibody technique (FAT), membrane-filtration FAT, nested polymerase chain reaction (nested PCR) and three real-time quantitative PCR assays. Injection challenge of specific pathogen-free Chinook salmon, Oncorhynchus tshawytscha (Walbaum), with R. salmoninarum was used to estimate diagnostic sensitivity and specificity. Results did not identify a single assay demonstrating the highest analytical and diagnostic performance characteristics, but revealed strengths and weaknesses of each test.
FORTRAN multitasking library for use on the ELXSI 6400 and the CRAY XMP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montry, G.R.
1985-07-16
A library of FORTRAN-based multitasking routines has been written for the ELXSI 6400 and the CRAY XMP. This library is designed to make multitasking codes easily transportable between machines with different hardware configurations. The library provides enhanced error checking and diagnostics over vendor-supplied multitasking intrinsics. The library also contains multitasking control structures not normally supplied by the vendor.
ERIC Educational Resources Information Center
Thorne, John C.; Coggins, Truman
2008-01-01
Background: Foetal Alcohol Spectrum Disorders (FASD) include the range of disabilities that occur in children exposed to alcohol during pregnancy, with Foetal Alcohol Syndrome (FAS) on the severe end of the spectrum. Clinical research has documented a range of cognitive, social, and communication deficits in FASD and it indicates the need for…
Imperfect pathogen detection from non-invasive skin swabs biases disease inference
DiRenzo, Graziella V.; Grant, Evan H. Campbell; Longo, Ana; Che-Castaldo, Christian; Zamudio, Kelly R.; Lips, Karen
2018-01-01
1. Conservation managers rely on accurate estimates of disease parameters, such as pathogen prevalence and infection intensity, to assess disease status of a host population. However, these disease metrics may be biased if low-level infection intensities are missed by sampling methods or laboratory diagnostic tests. These false negatives underestimate pathogen prevalence and overestimate mean infection intensity of infected individuals. 2. Our objectives were two-fold. First, we quantified false negative error rates of Batrachochytrium dendrobatidis on non-invasive skin swabs collected from an amphibian community in El Copé, Panama. We swabbed amphibians twice in sequence, and we used a recently developed hierarchical Bayesian estimator to assess disease status of the population. Second, we developed a novel hierarchical Bayesian model to simultaneously account for imperfect pathogen detection from field sampling and laboratory diagnostic testing. We evaluated the performance of the model using simulations and varying sampling design to quantify the magnitude of bias in estimates of pathogen prevalence and infection intensity. 3. We show that Bd detection probability from skin swabs was related to host infection intensity, where Bd infections < 10 zoospores have < 95% probability of being detected. If imperfect Bd detection was not considered, then Bd prevalence was underestimated by as much as 16%. In the Bd-amphibian system, this indicates a need to correct for imperfect pathogen detection caused by skin swabs in persisting host communities with low-level infections. More generally, our results have implications for study designs in other disease systems, particularly those with similar objectives, biology, and sampling decisions. 4. Uncertainty in pathogen detection is an inherent property of most sampling protocols and diagnostic tests, where the magnitude of bias depends on the study system, type of infection, and false negative error rates. 
Given that it may be difficult to know this information in advance, we advocate that the most cautious approach is to assume all errors are possible and to accommodate them by adjusting sampling designs. The modeling framework presented here improves the accuracy in estimating pathogen prevalence and infection intensity.
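The false-negative bias described above can be illustrated with a minimal simulation (hypothetical numbers, not the authors' hierarchical Bayesian model): when infected hosts escape detection with some probability, apparent prevalence underestimates true prevalence, and a Rogan-Gladen-style correction recovers it if the detection probability is known.

```python
import random

random.seed(42)

def simulate_survey(n_hosts, true_prev, p_detect):
    """Simulate one swab per host: infected hosts test positive
    only with probability p_detect (false negatives possible)."""
    positives = 0
    for _ in range(n_hosts):
        infected = random.random() < true_prev
        if infected and random.random() < p_detect:
            positives += 1
    return positives / n_hosts

true_prev, p_detect = 0.40, 0.80
naive = simulate_survey(100_000, true_prev, p_detect)

# Rogan-Gladen-style correction: divide apparent prevalence by
# the detection probability (assumed known here for illustration).
corrected = naive / p_detect

print(f"naive estimate:     {naive:.3f}")      # biased low
print(f"corrected estimate: {corrected:.3f}")  # close to the true 0.40
```

In the full model of the paper the detection probability itself depends on infection intensity and must be estimated jointly; the sketch only shows why ignoring false negatives biases prevalence downward.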
Chapiro, Julius; Wood, Laura D.; Lin, MingDe; Duran, Rafael; Cornish, Toby; Lesage, David; Charu, Vivek; Schernthaner, Rüdiger; Wang, Zhijun; Tacher, Vania; Savic, Lynn Jeanette; Kamel, Ihab R.
2014-01-01
Purpose To evaluate the diagnostic performance of three-dimensional (3D) quantitative enhancement-based and diffusion-weighted volumetric magnetic resonance (MR) imaging assessment of hepatocellular carcinoma (HCC) lesions in determining the extent of pathologic tumor necrosis after transarterial chemoembolization (TACE). Materials and Methods This institutional review board–approved retrospective study included 17 patients with HCC who underwent TACE before surgery. Semiautomatic 3D volumetric segmentation of target lesions was performed at the last MR examination before orthotopic liver transplantation or surgical resection. The amount of necrotic tumor tissue on contrast material–enhanced arterial phase MR images and the amount of diffusion-restricted tumor tissue on apparent diffusion coefficient (ADC) maps were expressed as a percentage of the total tumor volume. Visual assessment of the extent of tumor necrosis and tumor response according to European Association for the Study of the Liver (EASL) criteria was performed. Pathologic tumor necrosis was quantified by using slide-by-slide segmentation. Correlation analysis was performed to evaluate the predictive values of the radiologic techniques. Results At histopathologic examination, the mean percentage of tumor necrosis was 70% (range, 10%–100%). Both 3D quantitative techniques demonstrated a strong correlation with tumor necrosis at pathologic examination (R2 = 0.9657 and R2 = 0.9662 for quantitative EASL and quantitative ADC, respectively) and a strong intermethod agreement (R2 = 0.9585).
Both methods showed a significantly lower discrepancy with pathologically measured necrosis (residual standard error [RSE] = 6.38 and 6.33 for quantitative EASL and quantitative ADC, respectively) when compared with non-3D techniques (RSE = 12.18 for visual assessment). Conclusion This radiologic-pathologic correlation study demonstrates the diagnostic accuracy of 3D quantitative MR imaging techniques in identifying pathologically measured tumor necrosis in HCC lesions treated with TACE. © RSNA, 2014 Online supplemental material is available for this article. PMID:25028783
Weight, D G
1998-09-01
This article reviews the persisting difficulty and the importance of the diagnosis of minor head trauma. The diagnosis has been complicated by pervasive disagreement regarding diagnostic criteria. This is primarily a result of the fact that evidence for actual injury is hard to obtain in minor cases because most symptoms tend to be subjective and have high base rates in the normal, uninjured population. At the same time, the diagnostic decision has important implications for patients in terms of treatment, expectancy for future function and lifestyle, and compensation for injuries. Decision theory leads us to an awareness of diagnostic errors. In addition to a correct determination, the clinician can make the error of not diagnosing an injury when it has in fact occurred, or of making a positive diagnosis where there is no injury. The optimal strategy is to set the cutoff at the midpoint of these two error probabilities. The clinician may be willing to make one error rather than the other depending on the cost and bias involved. The second error is more likely to be made when the clinician stands as a strong advocate for the patient and is willing to provide any help necessary to encourage treatment, give patients a rationale for understanding their symptoms, and help them obtain compensation for injuries. This can also lead to significant overdiagnosis of injury. The first error is more likely to be made when the clinician recognizes the potential for increasing costs to the health-care industry, the court system, and increasing personal injury claims. He or she may also recognize the vulnerability to the risk for symptom invalidity, the perpetuation of patient symptoms through suggestion, and the need for a biologic explanation for life stressors and preexisting emotional and personality constraints.
It can be argued that the most objective diagnostic opinion, uninfluenced by the above biases, should ultimately be in the best interest of the patient, the clinician, legal consultants, and society. Based on the findings in this chapter, at least four symptom constellations can be identified. These have differing probabilities for residual symptoms of minor head trauma and include the following: 1. These patients' symptoms clearly meet the criteria from Table 2. This includes several findings from 1 to 10 of Table 1, together with abnormal neuropsychologic testing on the AIR, General Neuropsychological Deficit Scale, or other indicators of diminished cortical integrity. This group of patients shows a very strong probability of having experienced a brain injury and for showing residual symptoms of minor head trauma. 2. These patients have experienced concussional symptoms (e.g., headache, mild confusion, and balance and visual symptoms) that were documented at the time of injury but sustained no or brief (< 15 seconds) LOC or PTA and, therefore, do not qualify for the diagnosis in Table 2. They may still have several symptoms from Table 1, including objective findings from neuroscanning and variable neuropsychologic testing, especially in measures of attention and delayed recall. This group also shows a high probability for residual, unresolved concussional, and related symptoms. 3. These patients may have shown evidence of concussional symptoms at the time of injury, with no or brief LOC, PTA, or other symptoms from Table 1 (1-10). They continue to show persistent symptoms after 6 months to 1 year. With this group, there is a strong probability that emotional, motivational and premorbid personality factors are either causing or supporting these residual symptoms. 4. In these patients, clearly identifiable postconcussive symptoms at the time of injury are not easy to identify, and perhaps headache is the only reported symptom. 
There was no LOC or PTA, and virtually none of symptoms 1 to 10 in Table 1 are observed. These patients show strong evidence of symptom invalidity on MMPI-2 or other measures, and marked somatoform, depression, anx
Technical errors in planar bone scanning.
Naddaf, Sleiman Y; Collier, B David; Elgazzar, Abdelhamid H; Khalil, Magdy M
2004-09-01
Optimal technique for planar bone scanning improves image quality, which in turn improves diagnostic efficacy. Because planar bone scanning is one of the most frequently performed nuclear medicine examinations, maintaining high standards for this examination is a daily concern for most nuclear medicine departments. Although some problems such as patient motion are frequently encountered, the degraded images produced by many other deviations from optimal technique are rarely seen in clinical practice and therefore may be difficult to recognize. The objectives of this article are to list optimal techniques for 3-phase and whole-body bone scanning, to describe and illustrate a selection of deviations from these optimal techniques for planar bone scanning, and to explain how to minimize or avoid such technical errors.
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN-based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. It also emphasizes frequent sources of FORTRAN problems which require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action of solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.
Malpractice claims related to musculoskeletal imaging. Incidence and anatomical location of lesions.
Fileni, Adriano; Fileni, Gaia; Mirk, Paoletta; Magnavita, Giulia; Nicoli, Marzia; Magnavita, Nicola
2013-12-01
Failure to detect lesions of the musculoskeletal system is a frequent cause of malpractice claims against radiologists. We examined all the malpractice claims related to alleged errors in musculoskeletal imaging filed against Italian radiologists over a period of 14 years (1993-2006). During the period considered, a total of 416 claims for alleged diagnostic errors relating to the musculoskeletal system were filed against radiologists; of these, 389 (93.5%) concerned failure to report fractures, and 15 (3.6%) failure to diagnose a tumour. Incorrect interpretation of bone pathology is among the most common causes of litigation against radiologists; alone, it accounts for 36.4% of all malpractice claims filed during the observation period. Awareness of this risk should encourage extreme caution and diligence.
Anti-retroviral therapy-induced status epilepticus in "pseudo-HIV serodeconversion".
Etgen, Thorleif; Eberl, Bernhard; Freudenberger, Thomas
2010-01-01
Diligence in the interpretation of results is essential as information gained from the psychiatric patient's history might often be restricted. Nonobservance of established guidelines may lead to a wrong diagnosis, induce a false therapy and result in life-threatening situations. Communication errors between hospitals and doctors and uncritical acceptance of prior diagnoses add substantially to this problem. We present a patient with alcohol-related dementia who received anti-retroviral therapy that promoted a non-convulsive status epilepticus. HIV serodeconversion was considered after our laboratory result yielded a HIV-negative status. Critical review of previous diagnostic investigations revealed several errors in the diagnosis of HIV infection leading to a "pseudo-serodeconversion." Finally, anti-retroviral therapy could be discontinued. Copyright © 2010 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balcazar, Mario D.; Yonehara, Katsuya; Moretti, Alfred
An intense neutrino beam is a unique probe for research beyond the Standard Model. Fermilab is the main institution producing the most powerful and wide-spectrum neutrino beam. From that perspective, a radiation-robust beam diagnostic system is a critical element for maintaining the quality of the neutrino beam. Within this context, a novel radiation-resistive beam profile monitor based on a gas-filled RF cavity is proposed. The goal of this measurement is to study a tunable Q-factor RF cavity to determine the accuracy of the RF signal as a function of the quality factor. Specifically, the measurement error of the Q-factor in the RF calibration is investigated. Then, the RF system will be improved to minimize signal error.
Oral precancerous lesions: Problems of early detection and oral cancer prevention
NASA Astrophysics Data System (ADS)
Gileva, Olga S.; Libik, Tatiana V.; Danilov, Konstantin V.
2016-08-01
The study presents the results of research into the structure, local and systemic risk factors, peculiarities of clinical manifestation, and quality of primary diagnosis of precancerous oral mucosa lesions (OMLs). The study indicated a wide range of OMLs and a high (25.4%) proportion of oral precancerous lesions (OPLs) in their structure. The high percentage of different diagnostic errors and the lack of oncological awareness among dental practitioners, as well as the sharp necessity of including precancer/cancer early detection techniques in their daily practice, were noted. The effectiveness of a chemiluminescence system for early detection of OPLs and oral cancer was demonstrated, and the prospects of infrared thermography as a diagnostic tool were also discussed.
Loopback Tester: a synchronous communications circuit diagnostic device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maestas, J.H.
1986-07-01
The Loopback Tester is an Intel SBC 86/12A Single Board Computer and an Intel SBC 534 Communications Expansion Board configured and programmed to perform various basic tests on synchronous communications circuits. These tests include: (1) Data Communications Equipment (DCE) transmit timing detection, (2) data rate measurement, (3) instantaneous loopback indication, and (4) bit error rate testing. It requires no initial setup after plug-in, and can be used to locate the source of communications loss in a circuit. It can also be used to determine when crypto variable mismatch problems are the source of communications loss. This report discusses the functionality of the Loopback Tester as a diagnostic device. It also discusses the hardware and software which implement this simple yet reliable device.
Kolkutin, V V; Fetisov, V A
2003-12-01
The authors discuss one of the important aspects of military medicolegal laboratory activities: quality control of the medical care rendered in military treatment-and-prophylactic institutions during the 1990s. Analysis of medical care defects (MCDs) revealed their nature, causes and sites of origin at the pre-hospital (PHS) and hospital (HS) stages. Despite some decrease in the total number of MCDs revealed, HS defects prevail (more than 75%); organizational defects predominate at the PHS and diagnostic defects at the HS. The main causes of MCDs are inadequate qualification of medical workers, defects in the organization of the treatment-and-diagnostic process, and inadequate examination of patients.
Winnowing DNA for rare sequences: highly specific sequence and methylation based enrichment.
Thompson, Jason D; Shibahara, Gosuke; Rajan, Sweta; Pel, Joel; Marziali, Andre
2012-01-01
Rare mutations in cell populations are known to be hallmarks of many diseases and cancers. Similarly, differential DNA methylation patterns arise in rare cell populations with diagnostic potential such as fetal cells circulating in maternal blood. Unfortunately, the frequency of alleles with diagnostic potential, relative to wild-type background sequence, is often well below the frequency of errors in currently available methods for sequence analysis, including very high throughput DNA sequencing. We demonstrate a DNA preparation and purification method that through non-linear electrophoretic separation in media containing oligonucleotide probes, achieves 10,000 fold enrichment of target DNA with single nucleotide specificity, and 100 fold enrichment of unmodified methylated DNA differing from the background by the methylation of a single cytosine residue.
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sufficient sample sizes for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
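The sample size logic behind such tables can be made concrete with Buderer's widely used formula: the number of diseased (or non-diseased) subjects needed to estimate sensitivity (or specificity) within a desired precision, inflated by the disease prevalence to a total sample size. The sketch below uses illustrative values, not the paper's tables.

```python
import math
from statistics import NormalDist

def min_sample_size(expected_value, precision, prevalence,
                    for_sensitivity=True, alpha=0.05):
    """Buderer's formula: minimum total sample size to estimate
    sensitivity (or specificity) within +/- `precision` at the
    given confidence level (1 - alpha)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_subgroup = (z**2) * expected_value * (1 - expected_value) / precision**2
    # Sensitivity is estimated from diseased subjects only,
    # specificity from non-diseased subjects only.
    denom = prevalence if for_sensitivity else (1 - prevalence)
    return math.ceil(n_subgroup / denom)

# e.g. expected sensitivity 0.90, precision +/-0.05, 20% prevalence
n = min_sample_size(0.90, 0.05, 0.20)
print(n)  # -> 692
```

Note how low prevalence dominates the total: the same precision for specificity in this example needs only 173 subjects, since 80% of the sample is disease-free.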
TU-G-BRD-08: In-Vivo EPID Dosimetry: Quantifying the Detectability of Four Classes of Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, E; Phillips, M; Bojechko, C
Purpose: EPID dosimetry is an emerging method for treatment verification and QA. Given that the in-vivo EPID technique is in clinical use at some centers, we investigate the sensitivity and specificity for detecting different classes of errors. We assess the impact of these errors using dose volume histogram endpoints. Though data exist for EPID dosimetry performed pre-treatment, this is the first study quantifying its effectiveness when used during patient treatment (in-vivo). Methods: We analyzed 17 patients; EPID images of the exit dose were acquired and used to reconstruct the planar dose at isocenter. This dose was compared to the TPS dose using a 3%/3mm gamma criterion. To simulate errors, modifications were made to treatment plans using four possible classes of error: (1) patient misalignment, (2) changes in patient body habitus, (3) machine output changes, and (4) MLC misalignments. Each error was applied with varying magnitudes. To assess the detectability of the error, the area under a ROC curve (AUC) was analyzed. The AUC was compared to changes in D99 of the PTV introduced by the simulated error. Results: For systematic changes in the MLC leaves, changes in the machine output and patient habitus, the AUC varied from 0.78–0.97, scaling with the magnitude of the error. The optimal gamma threshold as determined by the ROC curve varied between 84–92%. There was little diagnostic power in detecting random MLC leaf errors and patient shifts (AUC 0.52–0.74). Some errors with weak detectability had large changes in D99. Conclusion: These data demonstrate the ability of EPID-based in-vivo dosimetry to detect variations in patient habitus and errors related to machine parameters such as systematic MLC misalignments and machine output changes. There was no correlation found between the detectability of the error using the gamma pass rate, ROC analysis and the impact on the dose volume histogram. Funded by grant R18HS022244 from AHRQ.
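The AUC used above to quantify detectability can be computed directly from the Mann-Whitney U statistic, without fitting an explicit ROC curve. The sketch below uses hypothetical gamma pass rates, not the study's data; a lower pass rate is the signal that an error is present.

```python
def auc_from_scores(scores_negative, scores_positive):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen error case scores lower
    (lower gamma pass rate) than a randomly chosen error-free case,
    with ties counted as half."""
    n_pairs = wins = ties = 0
    for neg in scores_negative:      # plans with a simulated error
        for pos in scores_positive:  # error-free plans
            n_pairs += 1
            if neg < pos:
                wins += 1
            elif neg == pos:
                ties += 1
    return (wins + 0.5 * ties) / n_pairs

# Hypothetical gamma pass rates (%) for illustration only
error_free = [97.0, 95.5, 98.2, 96.8, 94.9]
with_error = [88.0, 91.5, 96.0, 85.2, 93.3]
print(round(auc_from_scores(with_error, error_free), 2))  # -> 0.92
```

An AUC of 0.5 means the pass rate carries no diagnostic information (as the study found for random MLC errors and patient shifts); 1.0 means perfect separation.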
Holmes, John B; Dodds, Ken G; Lee, Michael A
2017-03-02
An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.
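The quantity at the center of this discussion, the prediction error variance-covariance (PEV) matrix, is classically obtained as the random-effect block of the inverse of the mixed-model-equations coefficient matrix, scaled by the residual variance. The toy sketch below (identity relationship matrix, illustrative variances; not the authors' correction) shows why this is expensive: the block to invert grows with the number of animals, not the number of fixed effects.

```python
import numpy as np

# Toy single-trait model y = Xb + Zu + e with 2 contemporary
# groups (fixed) and 6 animals (random), A = I for simplicity.
n_groups, n_animals = 2, 6
group = np.array([0, 0, 0, 1, 1, 1])
X = np.eye(n_groups)[group]   # fixed-effect (contemporary group) design
Z = np.eye(n_animals)         # one record per animal
sigma_e2, sigma_u2 = 1.0, 0.5
lam = sigma_e2 / sigma_u2     # variance ratio in the MME

# Henderson's mixed-model-equations coefficient matrix
C = np.block([[X.T @ X, X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.eye(n_animals)]])
Cinv = np.linalg.inv(C)

# PEV of the breeding values: residual variance times the
# random-effect block of C^{-1} (note: independent of y).
PEV = sigma_e2 * Cinv[n_groups:, n_groups:]
print(np.round(np.diag(PEV), 3))
```

Each diagonal entry of PEV lies below sigma_u2, reflecting the information gained from the records; the paper's contribution is recovering group-averaged PEV from the much smaller fixed-effect covariance block instead of inverting the full matrix.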
Observer detection of image degradation caused by irreversible data compression processes
NASA Astrophysics Data System (ADS)
Chen, Ji; Flynn, Michael J.; Gross, Barry; Spizarny, David
1991-05-01
Irreversible data compression methods have been proposed to reduce the data storage and communication requirements of digital imaging systems. In general, the error produced by compression increases as an algorithm's compression ratio is increased. We have studied the relationship between compression ratios and the detection of induced error using radiologic observers. The nature of the errors was characterized by calculating the power spectrum of the difference image. In contrast with studies designed to test whether detected errors alter diagnostic decisions, this study was designed to test whether observers could detect the induced error. A paired-film observer study was conducted with chest radiographs selected and ranked for subtle evidence of interstitial disease, pulmonary nodules, or pneumothoraces. Images were digitized at 86 microns (4K X 5K) and 2K X 2K regions were extracted. A full-frame discrete cosine transform method was used to compress images at ratios varying between 6:1 and 60:1. The decompressed images were reprinted next to the original images in a randomized order with a laser film printer. The use of a film digitizer and a film printer which can reproduce all of the contrast and detail in the original radiograph makes the results of this study insensitive to instrument performance and primarily dependent on radiographic image quality. The results of this study define conditions for which errors associated with irreversible compression cannot be detected by radiologic observers. The results indicate that an observer can detect the errors introduced by this compression algorithm for compression ratios of 10:1 (1.2 bits/pixel) or higher.
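The full-frame DCT compression scheme can be sketched as follows: transform the whole image, keep only the largest-magnitude coefficients (fewer coefficients at higher compression ratios), and invert. A random test array stands in for the digitized radiographs, and the original study's codec details (quantization, entropy coding) are not reproduced; the sketch only shows the monotonic growth of error with compression ratio.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(1)
image = rng.normal(size=(64, 64))  # stand-in for a radiograph region

def compress(img, ratio):
    """Full-frame DCT 'compression': keep the largest-magnitude
    1/ratio of coefficients, zero the rest, and invert."""
    coeffs = dctn(img, norm="ortho")
    n_keep = coeffs.size // ratio
    thresh = np.sort(np.abs(coeffs).ravel())[-n_keep]
    kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return idctn(kept, norm="ortho")

rms = {}
for ratio in (6, 10, 60):
    err = image - compress(image, ratio)
    rms[ratio] = float(np.sqrt(np.mean(err**2)))
    print(f"{ratio:2d}:1 -> RMS error {rms[ratio]:.3f}")  # grows with ratio
```

With the orthonormal DCT, the RMS error equals the energy of the discarded coefficients (Parseval), which is why dropping more coefficients strictly increases the error.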
Donn, Steven M; McDonnell, William M
2012-01-01
The Institute of Medicine has recommended a change in culture from "name and blame" to patient safety. This will require system redesign to identify and address errors, establish performance standards, and set safety expectations. This approach, however, is at odds with the present medical malpractice (tort) system. The current system is outcomes-based, meaning that health care providers and institutions are often sued despite providing appropriate care. Nevertheless, the focus should remain to provide the safest patient care. Effective peer review may be hindered by the present tort system. Reporting of medical errors is a key piece of peer review and education, and both anonymous reporting and confidential reporting of errors have potential disadvantages. Diagnostic and treatment errors continue to be the leading sources of allegations of malpractice in pediatrics, and the neonatal intensive care unit is uniquely vulnerable. Most errors result from systems failures rather than human error. Risk management can be an effective process to identify, evaluate, and address problems that may injure patients, lead to malpractice claims, and result in financial losses. Risk management identifies risk or potential risk, calculates the probability of an adverse event arising from a risk, estimates the impact of the adverse event, and attempts to control the risk. Implementation of a successful risk management program requires a positive attitude, sufficient knowledge base, and a commitment to improvement. Transparency in the disclosure of medical errors and a strategy of prospective risk management in dealing with medical errors may result in a substantial reduction in medical malpractice lawsuits, lower litigation costs, and a more safety-conscious environment.
NASA Astrophysics Data System (ADS)
Gao, Jing; Burt, James E.
2017-12-01
This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method originating in machine learning that has not previously been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
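The per-point decomposition the abstract relies on is standard: train many models on resampled data, then at each test point split the error into squared bias (mean prediction minus truth) and variance (spread of predictions). A minimal sketch with a deliberately underfit linear model standing in for the regression tree, on synthetic data rather than Landsat imagery:

```python
import numpy as np

rng = np.random.default_rng(2)
x_test = np.linspace(0, 1, 50)
f = lambda x: np.sin(2 * np.pi * x)   # true underlying process
noise = 0.2

# Train many models on independently drawn samples, then decompose
# the error at each test point into bias^2 and variance "maps".
preds = []
for _ in range(200):
    x = rng.uniform(0, 1, 30)
    y = f(x) + rng.normal(0, noise, 30)
    coef = np.polyfit(x, y, deg=1)    # deliberately underfit: high bias
    preds.append(np.polyval(coef, x_test))
preds = np.array(preds)               # shape (200, 50)

bias_sq = (preds.mean(axis=0) - f(x_test)) ** 2  # per-point bias^2
variance = preds.var(axis=0)                      # per-point variance
```

For this underfit model the bias map dominates the variance map, which is exactly the diagnosis that would steer an analyst toward a more flexible model rather than toward ensembling.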
McFadden, Pam; Crim, Andrew
2016-01-01
Diagnostic errors in primary care contribute to increased morbidity and mortality, and billions in costs each year. Improvements in the way practicing physicians are taught so as to optimally perform differential diagnosis can increase patient safety and lower the costs of care. This study represents a comparison of the effectiveness of two approaches to CME training directed at improving the primary care practitioner's diagnostic capabilities against seven common and important causes of joint pain. Using a convenience sampling methodology, one group of primary care practitioners was trained by a traditional live, expert-led, multimedia-based training activity supplemented with interactive practice opportunities and feedback (control group). The second group was trained online with a multimedia-based training activity supplemented with interactive practice opportunities and feedback delivered by an artificial intelligence-driven simulation/tutor (treatment group). Before their respective instructional intervention, there were no significant differences in the diagnostic performance of the two groups against a battery of case vignettes presenting with joint pain. Using the same battery of case vignettes to assess postintervention diagnostic performance, there was a slight but not statistically significant improvement in the control group's diagnostic accuracy (P = .13). The treatment group, however, demonstrated a significant improvement in accuracy (P < .02; Cohen d, effect size = 0.79). These data indicate that within the context of a CME activity, a significant improvement in diagnostic accuracy can be achieved by the use of a web-delivered, multimedia-based instructional activity supplemented by practice opportunities and feedback delivered by an artificial intelligence-driven simulation/tutor.
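The reported effect size (Cohen d = 0.79) is the standardized mean difference between groups. A small self-contained helper, with made-up score lists rather than the study's data:

```python
import math

def cohens_d(group1, group2):
    # Cohen's d: mean difference divided by the pooled standard deviation
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m2 - m1) / pooled_sd

# Illustrative pre/post diagnostic-accuracy scores (synthetic numbers)
d = cohens_d([1, 2, 3, 4, 5], [3, 4, 5, 6, 7])
```

By the usual rule of thumb, d around 0.8 (as in the treatment group here) is conventionally read as a large effect.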
Lockhart, Joseph J; Satya-Murti, Saty
2017-11-01
Cognitive effort is an essential part of both forensic and clinical decision-making. Errors occur in both fields because the cognitive process is complex and prone to bias. We performed a selective review of full-text English language literature on cognitive bias leading to diagnostic and forensic errors. Earlier work (1970-2000) concentrated on classifying and raising bias awareness. Recently (2000-2016), the emphasis has shifted toward strategies for "debiasing." While the forensic sciences have focused on the control of misleading contextual cues, clinical debiasing efforts have relied on checklists and hypothetical scenarios. No single generally applicable and effective bias reduction strategy has emerged so far. Generalized attempts at bias elimination have not been particularly successful. It is time to shift focus to the study of errors within specific domains, and how to best communicate uncertainty in order to improve decision making on the part of both the expert and the trier-of-fact. © 2017 American Academy of Forensic Sciences.
Artificial neural networks for processing fluorescence spectroscopy data in skin cancer diagnostics
NASA Astrophysics Data System (ADS)
Lenhardt, L.; Zeković, I.; Dramićanin, T.; Dramićanin, M. D.
2013-11-01
Over the years various optical spectroscopic techniques have been widely used as diagnostic tools in the discrimination of many types of malignant diseases. Recently, synchronous fluorescent spectroscopy (SFS) coupled with chemometrics has been applied in cancer diagnostics. The SFS method involves simultaneous scanning of both emission and excitation wavelengths while keeping the interval of wavelengths (constant-wavelength mode) or frequencies (constant-energy mode) between them constant. This method is fast, relatively inexpensive, sensitive and non-invasive. Total synchronous fluorescence spectra of normal skin, nevus and melanoma samples were used as input for training of artificial neural networks. Two different types of artificial neural networks were trained, the self-organizing map and the feed-forward neural network. Histopathology results of investigated skin samples were used as the gold standard for network output. Based on the obtained classification success rate of neural networks, we concluded that both networks provided high sensitivity with classification errors between 2 and 4%.
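A feed-forward classifier of the kind trained here can be sketched end to end in numpy. The two-class "spectra" below are synthetic Gaussian peaks standing in for SFS data, and the single-hidden-layer architecture is an illustrative stand-in, not the authors' network:

```python
import numpy as np

rng = np.random.default_rng(4)
wl = np.linspace(0, 1, 20)            # normalized wavelength axis

def make_spectra(peak, n):
    # Gaussian emission peak plus measurement noise (toy stand-in for SFS)
    base = np.exp(-((wl - peak) ** 2) / 0.01)
    return base + 0.05 * rng.normal(size=(n, wl.size))

X = np.vstack([make_spectra(0.3, 100), make_spectra(0.6, 100)])
y = np.r_[np.zeros(100), np.ones(100)]

# One hidden layer (tanh), logistic output, full-batch gradient descent
W1 = 0.1 * rng.normal(size=(20, 8)); b1 = np.zeros(8)
W2 = 0.1 * rng.normal(size=8);       b2 = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0
for _ in range(1000):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    g = (p - y) / len(y)                   # dLoss/dlogit (cross-entropy)
    gh = np.outer(g, W2) * (1.0 - h ** 2)  # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum()
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)

error_rate = np.mean((p > 0.5) != (y == 1.0))  # training classification error
```

On well-separated classes like these, the training error drops close to zero, which mirrors the 2-4% classification errors the abstract reports (though against held-out, histopathology-labeled samples).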
Syndrome Diagnosis: Human Intuition or Machine Intelligence?
Braaten, Øivind; Friestad, Johannes
2008-01-01
The aim of this study was to investigate whether artificial intelligence methods can represent objective methods that are essential in syndrome diagnosis. Most syndromes have no external criterion standard of diagnosis. The predictive value of a clinical sign used in diagnosis is dependent on the prior probability of the syndrome diagnosis. Clinicians often misjudge the probabilities involved. Syndromology needs objective methods to ensure diagnostic consistency, and take prior probabilities into account. We applied two basic artificial intelligence methods to a database of machine-generated patients - a 'vector method' and a set method. As reference methods we ran an ID3 algorithm, a cluster analysis and a naive Bayes' calculation on the same patient series. The overall diagnostic error rate for the vector algorithm was 0.93%, and for the ID3 0.97%. For the clinical signs found by the set method, the predictive values varied between 0.71 and 1.0. The artificial intelligence methods that we used proved simple, robust and powerful, and represent objective diagnostic methods. PMID:19415142
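The point that a sign's predictive value depends on the prior probability of the syndrome is exactly what a naive Bayes calculation makes explicit. A toy sketch in which the syndromes, priors, and per-sign likelihoods are all invented for illustration:

```python
# Toy naive Bayes over clinical signs; all numbers are invented.
priors = {"syndrome_A": 0.01, "syndrome_B": 0.99}
p_sign = {  # P(sign present | syndrome)
    "syndrome_A": {"hypertelorism": 0.9, "cleft_palate": 0.6},
    "syndrome_B": {"hypertelorism": 0.05, "cleft_palate": 0.02},
}

def posterior(findings):
    # findings: dict mapping sign -> bool (present/absent)
    scores = {}
    for s, prior in priors.items():
        p = prior
        for sign, present in findings.items():
            q = p_sign[s][sign]
            p *= q if present else (1.0 - q)
        scores[s] = p
    z = sum(scores.values())
    return {s: p / z for s, p in scores.items()}

post = posterior({"hypertelorism": True, "cleft_palate": True})
```

Even with a 1% prior, two strongly associated signs shift the posterior heavily toward the rare syndrome, while either sign alone would be far less conclusive. This is the kind of prior-sensitive calculation clinicians tend to misjudge intuitively.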
NASA Astrophysics Data System (ADS)
Galmed, A. H.; Elshemey, Wael M.
2017-08-01
Differentiating between normal, benign and malignant excised breast tissues is one of the major worldwide challenges that need a quantitative, fast and reliable technique in order to avoid personal errors in diagnosis. Laser induced fluorescence (LIF) is a promising technique that has been applied for the characterization of biological tissues including breast tissue. Unfortunately, only few studies have adopted a quantitative approach that can be directly applied for breast tissue characterization. This work provides a quantitative means for such characterization via introduction of several LIF characterization parameters and determining the diagnostic accuracy of each parameter in the differentiation between normal, benign and malignant excised breast tissues. Extensive analysis on 41 lyophilized breast samples using scatter diagrams, cut-off values, diagnostic indices and receiver operating characteristic (ROC) curves, shows that some spectral parameters (peak height and area under the peak) are superior for characterization of normal, benign and malignant breast tissues with high sensitivity (up to 0.91), specificity (up to 0.91) and accuracy ranking (highly accurate).
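The cut-off-and-ROC machinery used for these spectral parameters is easy to state concretely. The scores below are synthetic stand-ins for, say, area under a LIF peak; the AUC uses the rank (Mann-Whitney) formulation:

```python
# Sensitivity/specificity at a cut-off, plus AUC; all values synthetic.
malignant = [0.82, 0.91, 0.75, 0.88, 0.95]
benign    = [0.40, 0.55, 0.62, 0.48, 0.78]

def sens_spec(pos, neg, cutoff):
    # sensitivity: fraction of positives at or above the cut-off
    # specificity: fraction of negatives below the cut-off
    sens = sum(x >= cutoff for x in pos) / len(pos)
    spec = sum(x < cutoff for x in neg) / len(neg)
    return sens, spec

def auc(pos, neg):
    # P(randomly chosen positive scores higher than a random negative)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

sens, spec = sens_spec(malignant, benign, cutoff=0.72)
area = auc(malignant, benign)
```

Sliding the cut-off trades sensitivity against specificity; reporting both at a chosen cut-off together with the ROC area is what allows parameters like peak height to be ranked as "highly accurate".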
Is clinical cognition binary or continuous?
Norman, Geoffrey; Monteiro, Sandra; Sherbino, Jonathan
2013-08-01
A dominant theory of clinical reasoning is the so-called "dual processing theory," in which the diagnostic process may proceed through a rapid, unconscious, intuitive process (System 1) or a slow, conceptual, analytical process (System 2). Diagnostic errors are thought to arise primarily from cognitive biases originating in System 1. In this issue, Custers points out that this model is unnecessarily restrictive and that it is more likely that diagnostic tasks may proceed through a variety of mental strategies ranging from "analytical" to "intuitive." The authors of this commentary agree that the notion that System 1 and System 2 processes are somehow in competition and will necessarily lead to different conclusions is unnecessarily restrictive. On the other hand, they argue that there is substantial evidence in support of a dual processing model, and that most objections to dual processing theory can be easily accommodated by simply presuming that both processes operate in concert and that solving any task may rely to varying degrees on both processes.
Segura-Grau, A; Sáez-Fernández, A; Rodríguez-Lorenzo, A; Díaz-Rodríguez, N
2014-01-01
Ultrasound is a non-invasive, accessible, and versatile diagnostic technique that uses high-frequency sound waves to outline the organs of the human body, with no ionising radiation, in real time and with the capacity to visualise several planes. The high diagnostic yield of the technique, together with its ease of use and the characteristics mentioned above, has made it a routine method in daily medical practice. It is for this reason that the multidisciplinary character of this technique is being strengthened every day. Performing the technique correctly requires knowledge of the physical basis of ultrasound, the method and the equipment, as well as of human anatomy, in order to have the maximum information possible and to avoid diagnostic errors due to poor interpretation or lack of information.
Interpretation of fast-ion signals during beam modulation experiments
Heidbrink, W. W.; Collins, C. S.; Stagner, L.; ...
2016-07-22
Fast-ion signals produced by a modulated neutral beam are used to infer fast-ion transport. The measured quantity is the divergence of the perturbed fast-ion flux from the phase-space volume measured by the diagnostic, ∇·Γ̄. Since velocity-space transport often contributes to this divergence, the phase-space sensitivity of the diagnostic (or "weight function") plays a crucial role in the interpretation of the signal. The source and sink make major contributions to the signal, but their effects are accurately modeled by calculations that employ an exponential decay term for the sink. Recommendations for optimal design of a fast-ion transport experiment are given, illustrated by results from DIII-D measurements of fast-ion transport by Alfvén eigenmodes. Finally, the signal-to-noise ratio of the diagnostic, systematic uncertainties in the modeling of the source and sink, and the non-linearity of the perturbation all contribute to the error in ∇·Γ̄.
Driving indicators in teens with attention deficit hyperactivity and/or autism spectrum disorder.
Classen, Sherrilene; Monahan, Miriam; Brown, Kiah E; Hernandez, Stephanie
2013-12-01
Motor vehicle crashes are leading causes of death among teens. Those teens with attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), or a dual diagnosis of ADHD/ASD have defining characteristics placing them at a greater risk for crashes. This study examined the between-group demographic, clinical, and simulated driving differences in teens, representing three diagnostic groups, compared to healthy controls (HCs). In this prospective observational study, we used a convenience sample of teens recruited from a variety of community settings. Compared to the 22 HCs (mean age = 14.32, SD = ±0.72), teen drivers representing the diagnostic groups (ADHD/ASD, n = 6, mean age = 15.00, SD = ±0.63; ADHD, n = 9, mean age = 15.00, SD = ±1.00; ASD, n = 7, mean age = 15.14, SD = ±1.22) performed more poorly on visual function, visual-motor integration, cognition, and motor performance and made more errors on the driving simulator. Teens from diagnostic groups have more deficits driving on a driving simulator and may require a comprehensive driving evaluation.
Regression Analysis of Optical Coherence Tomography Disc Variables for Glaucoma Diagnosis.
Richter, Grace M; Zhang, Xinbo; Tan, Ou; Francis, Brian A; Chopra, Vikas; Greenfield, David S; Varma, Rohit; Schuman, Joel S; Huang, David
2016-08-01
To report diagnostic accuracy of optical coherence tomography (OCT) disc variables using both time-domain (TD) and Fourier-domain (FD) OCT, and to improve the use of OCT disc variable measurements for glaucoma diagnosis through regression analyses that adjust for optic disc size and axial length-based magnification error. Observational, cross-sectional. In total, 180 normal eyes of 112 participants and 180 eyes of 138 participants with perimetric glaucoma from the Advanced Imaging for Glaucoma Study. Diagnostic variables evaluated from TD-OCT and FD-OCT were: disc area, rim area, rim volume, optic nerve head volume, vertical cup-to-disc ratio (CDR), and horizontal CDR. These were compared with overall retinal nerve fiber layer thickness and ganglion cell complex. Regression analyses were performed that corrected for optic disc size and axial length. Area-under-receiver-operating curves (AUROC) were used to assess diagnostic accuracy before and after the adjustments. An index based on multiple logistic regression that combined optic disc variables with axial length was also explored with the aim of improving diagnostic accuracy of disc variables. Comparison of diagnostic accuracy of disc variables, as measured by AUROC. The unadjusted disc variables with the highest diagnostic accuracies were: rim volume for TD-OCT (AUROC=0.864) and vertical CDR (AUROC=0.874) for FD-OCT. Magnification correction significantly worsened diagnostic accuracy for rim variables, and while optic disc size adjustments partially restored diagnostic accuracy, the adjusted AUROCs were still lower. Axial length adjustments to disc variables in the form of multiple logistic regression indices led to a slight but insignificant improvement in diagnostic accuracy. Our various regression approaches were not able to significantly improve disc-based OCT glaucoma diagnosis. 
However, disc rim area and vertical CDR had very high diagnostic accuracy, and these disc variables can serve to complement additional OCT measurements for diagnosis of glaucoma.
Algorithmic Classification of Five Characteristic Types of Paraphasias.
Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven
2016-12-01
This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
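The three-stage decision the abstract describes (lexicality by frequency lookup, then phonological similarity, then semantic similarity) can be sketched as a cascade. The word list, toy embeddings, similarity measures, and thresholds below are invented stand-ins for SUBTLEXus norms, the authors' phonological algorithm, and word2vec vectors:

```python
import math

freq_norms = {"cat": 5.2, "hat": 4.8, "dog": 5.0}  # toy log-frequency "norms"
vec = {"cat": [0.9, 0.1, 0.3], "dog": [0.8, 0.2, 0.4]}  # toy embeddings

def lexicality(word):
    # Stage 1: real word vs neologism by presence in the frequency norms
    return "real_word" if word in freq_norms else "neologism"

def phon_overlap(a, b):
    # Stage 2: crude positional-overlap stand-in for phonological similarity
    shared = sum(1 for x, y in zip(a, b) if x == y)
    return shared / max(len(a), len(b))

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.sqrt(sum(x * x for x in u)) *
                  math.sqrt(sum(x * x for x in v)))

def classify(target, response):
    if lexicality(response) == "neologism":
        return "neologistic"
    phon = phon_overlap(target, response) >= 0.5
    # fall back to dummy orthogonal vectors for out-of-vocabulary words
    sem = cosine(vec.get(target, [1, 0, 0]), vec.get(response, [0, 1, 0])) >= 0.7
    if phon and sem: return "mixed"
    if phon: return "formal"
    if sem: return "semantic"
    return "unrelated"
```

For example, "hat" for "cat" comes out formal (phonologically related real word), "dog" for "cat" semantic, and an out-of-vocabulary form neologistic, mirroring the study's five categories.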
Evaluation and optimization of sampling errors for the Monte Carlo Independent Column Approximation
NASA Astrophysics Data System (ADS)
Räisänen, Petri; Barker, W. Howard
2004-07-01
The Monte Carlo Independent Column Approximation (McICA) method for computing domain-average broadband radiative fluxes is unbiased with respect to the full ICA, but its flux estimates contain conditional random noise. McICA's sampling errors are evaluated here using a global climate model (GCM) dataset and a correlated-k distribution (CKD) radiation scheme. Two approaches to reduce McICA's sampling variance are discussed. The first is to simply restrict all of McICA's samples to cloudy regions. This avoids wasting precious few samples on essentially homogeneous clear skies. Clear-sky fluxes need to be computed separately for this approach, but this is usually done in GCMs for diagnostic purposes anyway. Second, accuracy can be improved by repeated sampling, and averaging those CKD terms with large cloud radiative effects. Although this naturally increases computational costs over the standard CKD model, random errors for fluxes and heating rates are reduced by typically 50% to 60%, for the present radiation code, when the total number of samples is increased by 50%. When both variance reduction techniques are applied simultaneously, globally averaged flux and heating rate random errors are reduced by a factor of ~3.
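The first variance-reduction idea, spending samples only where the field is actually heterogeneous, can be demonstrated on a toy flux field. The cloud fraction, flux values, and sample counts are invented for illustration, not taken from the GCM dataset:

```python
import numpy as np

# Toy domain: 30% of sub-columns are cloudy (variable flux), the rest
# are clear sky with a single homogeneous flux value.
rng = np.random.default_rng(3)
n_cols = 1000
cloudy = rng.random(n_cols) < 0.3
flux = np.where(cloudy, 200.0 + 50.0 * rng.random(n_cols), 300.0)  # W m^-2

exact = flux.mean()

def naive_estimate(n_samples):
    # sample sub-columns uniformly, cloudy or not
    idx = rng.integers(0, n_cols, n_samples)
    return flux[idx].mean()

def cloud_only_estimate(n_samples):
    # clear sky is homogeneous, so compute it exactly and spend
    # every sample on the cloudy sub-columns
    cloudy_flux = flux[cloudy]
    idx = rng.integers(0, cloudy_flux.size, n_samples)
    frac = cloudy.mean()
    return frac * cloudy_flux[idx].mean() + (1.0 - frac) * 300.0

naive_err = np.std([naive_estimate(20) - exact for _ in range(500)])
smart_err = np.std([cloud_only_estimate(20) - exact for _ in range(500)])
# smart_err is much smaller than naive_err for the same sample budget
```

The restricted estimator stays unbiased (the clear-sky part is exact) while its noise comes only from the comparatively small within-cloud variability, which is the same logic McICA applies per CKD term.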
Mandelker, Diana; Schmidt, Ryan J; Ankala, Arunkanth; McDonald Gibson, Kristin; Bowser, Mark; Sharma, Himanshu; Duffy, Elizabeth; Hegde, Madhuri; Santani, Avni; Lebo, Matthew; Funke, Birgit
2016-12-01
Next-generation sequencing (NGS) is now routinely used to interrogate large sets of genes in a diagnostic setting. Regions of high sequence homology continue to be a major challenge for short-read technologies and can lead to false-positive and false-negative diagnostic errors. At the scale of whole-exome sequencing (WES), laboratories may be limited in their knowledge of genes and regions that pose technical hurdles due to high homology. We have created an exome-wide resource that catalogs highly homologous regions that is tailored toward diagnostic applications. This resource was developed using a mappability-based approach tailored to current Sanger and NGS protocols. Gene-level and exon-level lists delineate regions that are difficult or impossible to analyze via standard NGS. These regions are ranked by degree of affectedness, annotated for medical relevance, and classified by the type of homology (within-gene, different functional gene, known pseudogene, uncharacterized noncoding region). Additionally, we provide a list of exons that cannot be analyzed by short-amplicon Sanger sequencing. This resource can help guide clinical test design, supplemental assay implementation, and results interpretation in the context of high homology. Genet Med 18(12):1282-1289.
Visualization and analysis of pulsed ion beam energy density profile with infrared imaging
NASA Astrophysics Data System (ADS)
Isakova, Y. I.; Pushkarev, A. I.
2018-03-01
Infrared imaging technique was used as a surface temperature-mapping tool to characterize the energy density distribution of intense pulsed ion beams on a thin metal target. The technique enables the measuring of the total ion beam energy and the energy density distribution along the cross section and allows one to optimize the operation of an ion diode and control target irradiation mode. The diagnostics was tested on the TEMP-4M accelerator at TPU, Tomsk, Russia and on the TEMP-6 accelerator at DUT, Dalian, China. The diagnostics was applied in studies of the dynamics of the target cooling in vacuum after irradiation and in the experiments with target ablation. Errors caused by the target ablation and target cooling during measurements have been analyzed. For Fluke Ti10 and Fluke Ti400 infrared cameras, the technique can achieve surface energy density sensitivity of 0.05 J/cm2 and spatial resolution of 1-2 mm. The thermal imaging diagnostics does not require expensive consumed materials. The measurement time does not exceed 0.1 s; therefore, this diagnostics can be used for the prompt evaluation of the energy density distribution of a pulsed ion beam and during automation of the irradiation process.
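The conversion at the heart of this diagnostic, from a temperature-rise map on a thin target to a surface energy density, is E ≈ ρ·c_p·d·ΔT per pixel under an adiabatic thin-foil assumption. A sketch with illustrative stainless-steel material constants and a synthetic Gaussian beam profile (none of these numbers are from the TEMP accelerators):

```python
import numpy as np

# Illustrative material/geometry assumptions (stainless steel foil)
rho = 7.9e3     # density, kg/m^3
c_p = 500.0     # specific heat, J/(kg K)
d = 100e-6      # target thickness, m

# Synthetic temperature-rise map: 40 K Gaussian hot spot on a 64x64 grid
yy, xx = np.mgrid[0:64, 0:64]
dT = 40.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 10.0 ** 2))

energy_density = rho * c_p * d * dT           # J/m^2 per pixel
energy_density_Jcm2 = energy_density / 1e4    # J/cm^2

pixel_area_cm2 = 0.01                         # assumed 1 mm x 1 mm pixels
total_energy = energy_density_Jcm2.sum() * pixel_area_cm2  # total beam energy, J
```

With these numbers the peak works out to about 1.6 J/cm^2, comfortably above the 0.05 J/cm^2 sensitivity quoted for the infrared cameras; the cooling and ablation errors the abstract analyzes enter through ΔT being measured a short time after the pulse.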
Brownian motion curve-based textural classification and its application in cancer diagnosis.
Mookiah, Muthu Rama Krishnan; Shah, Pratik; Chakraborty, Chandan; Ray, Ajoy K
2011-06-01
To develop an automated diagnostic methodology based on textural features of the oral mucosal epithelium to discriminate normal and oral submucous fibrosis (OSF). A total of 83 normal and 29 OSF images from histopathologic sections of the oral mucosa are considered. The proposed diagnostic mechanism consists of two parts: feature extraction using the Brownian motion curve (BMC) and design of a suitable classifier. The discrimination ability of the features has been substantiated by statistical tests. An error back-propagation neural network (BPNN) is used to classify OSF vs. normal. In development of an automated oral cancer diagnostic module, BMC has played an important role in characterizing textural features of the oral images. Fisher's linear discriminant analysis yields 100% sensitivity and 85% specificity, whereas BPNN leads to 92.31% sensitivity and 100% specificity, respectively. In addition to intensity and morphology-based features, textural features are also very important, especially in histopathologic diagnosis of oral cancer. In view of this, a set of textural features are extracted using the BMC for the diagnosis of OSF. Finally, a textural classifier is designed using BPNN, which leads to a diagnostic performance with 96.43% accuracy.
A manifesto for cardiovascular imaging: addressing the human factor
Fraser, Alan G
2017-01-01
Our use of modern cardiovascular imaging tools has not kept pace with their technological development. Diagnostic errors are common but seldom investigated systematically. Rather than more impressive pictures, our main goal should be more precise tests of function which we select because their appropriate use has therapeutic implications which in turn have a beneficial impact on morbidity or mortality. We should practise analytical thinking, use checklists to avoid diagnostic pitfalls, and apply strategies that will reduce biases and avoid overdiagnosis. We should develop normative databases, so that we can apply diagnostic algorithms that take account of variations with age and risk factors and that allow us to calculate pre-test probability and report the post-test probability of disease. We should report the imprecision of a test, or its confidence limits, so that reference change values can be considered in daily clinical practice. We should develop decision support tools to improve the quality and interpretation of diagnostic imaging, so that we choose the single best test irrespective of modality. New imaging tools should be evaluated rigorously, so that their diagnostic performance is established before they are widely disseminated; this should be a shared responsibility of manufacturers with clinicians, leading to cost-effective implementation. Trials should evaluate diagnostic strategies against independent reference criteria. We should exploit advances in machine learning to analyse digital data sets and identify those features that best predict prognosis or responses to treatment. Addressing these human factors will reap benefit for patients, while technological advances continue unpredictably. PMID:29029029
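The pre-test to post-test probability calculation this manifesto asks for is the odds form of Bayes' theorem via a test's likelihood ratio. A small sketch with generic sensitivity/specificity values, not figures from any specific imaging study:

```python
# Convert pre-test probability to post-test probability through a
# test's likelihood ratio (odds form of Bayes' theorem).
def post_test_probability(pre_test_p, sensitivity, specificity, positive=True):
    # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec
    lr = (sensitivity / (1.0 - specificity) if positive
          else (1.0 - sensitivity) / specificity)
    pre_odds = pre_test_p / (1.0 - pre_test_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# The same "positive" result means very different things at different priors:
low = post_test_probability(0.05, 0.90, 0.80)   # screening-type setting
high = post_test_probability(0.60, 0.90, 0.80)  # symptomatic patient
```

With sensitivity 0.90 and specificity 0.80, a positive result moves a 5% prior only to about 19%, but a 60% prior to about 87%, which is why reporting post-test probability rather than a bare "positive scan" matters.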
ERIC Educational Resources Information Center
Hayiou-Thomas, Marianna E.; Carroll, Julia M.; Leavett, Ruth; Hulme, Charles; Snowling, Margaret J.
2017-01-01
Background: This study considers the role of early speech difficulties in literacy development, in the context of additional risk factors. Method: Children were identified with speech sound disorder (SSD) at the age of 3½ years, on the basis of performance on the Diagnostic Evaluation of Articulation and Phonology. Their literacy skills were…
Optical systems in ergophthalmology
NASA Astrophysics Data System (ADS)
Kovalenko, Valentina; Besedovskaya, Valentina; Paloob, Tamara
1994-02-01
An important part of ergophthalmology is the diagnosis and treatment of refractive errors, accommodation problems, and visual disorders by means of special optical systems. Our diagnostic approach helps in choosing the right treatment strategy. Our therapeutic approach normalizes the muscle tonus and working capacity of the eye's accommodation apparatus, and makes it possible to obtain stable positive results in the treatment of refractive amblyopia.
Errichetti, Enzo; Stinco, Giuseppe
2016-07-01
Clinical distinction between pityriasis amiantacea-like tinea capitis and pityriasis amiantacea due to noninfectious inflammatory diseases is a troublesome task, with a significant likelihood of diagnostic errors/delays and prescription of inappropriate therapies. We report a case of pityriasis amiantacea-like tinea capitis with its dermoscopic findings in order to highlight the usefulness of dermoscopy in improving the recognition of such a condition.
Performance specifications for the extra-analytical phases of laboratory testing: Why and how.
Plebani, Mario
2017-07-01
An important priority in the current healthcare scenario should be to address errors in laboratory testing, which account for a significant proportion of diagnostic errors. Efforts made in laboratory medicine to enhance the diagnostic process have been directed toward improving technology, greater volumes and more accurate laboratory tests being achieved, but data collected in the last few years highlight the need to re-evaluate the total testing process (TTP) as the unique framework for improving quality and patient safety. Valuable quality indicators (QIs) and extra-analytical performance specifications are required for guidance in improving all TTP steps. Yet in the literature no data are available on extra-analytical performance specifications based on outcomes, nor is it possible to set any specification using calculations involving biological variability. The collection of data representing the state-of-the-art based on quality indicators is, therefore, underway. The adoption of a harmonized set of QIs, a common data collection and a standardised reporting method is mandatory, as it will not only allow the accreditation of clinical laboratories according to the International Standard, but also provide guidance for promoting improvement processes and guaranteeing quality care to patients.
Clinical laboratory: bigger is not always better.
Plebani, Mario
2018-06-27
Laboratory services around the world are undergoing substantial consolidation and changes through mechanisms ranging from mergers, acquisitions and outsourcing, primarily based on expectations to improve efficiency, increasing volumes and reducing the cost per test. However, the relationship between volume and costs is not linear, and numerous variables influence the end cost per test. In particular, the relationship between volumes and costs does not span the entire spectrum of clinical laboratories: high costs are associated with low volumes up to a threshold of 1 million tests per year. Over this threshold, there is no linear association between volumes and costs, as laboratory organization rather than test volume more significantly affects the final costs. Currently, data on laboratory errors and associated diagnostic errors and risk for patient harm emphasize the need for a paradigmatic shift: from a focus on volumes and efficiency to a patient-centered vision restoring the nature of laboratory services as an integral part of the diagnostic and therapy process. Process and outcome quality indicators are effective tools to measure and improve laboratory services, by stimulating a competition based on intra- and extra-analytical performance specifications, intermediate outcomes and customer satisfaction. Rather than competing on economic value alone, clinical laboratories should adopt a strategy based on a set of harmonized quality indicators and performance specifications, active laboratory stewardship, and improved patient safety.
[Liability for loss of chance in neurological conditions in the Spanish public healthcare system].
Sardinero-García, Carlos; Santiago-Sáez, Andrés; Bravo-Llatas, M Del Carmen; Perea-Pérez, Bernardo; Albarrán-Juan, M Elena; Labajo-González, Elena; Benito-León, Julián
To analyse the sentences due to loss of chance passed by the Contentious-Administrative Court (i.e., in public medicine) in which both the origin of the disease to be treated and the damages were neurological. We analysed the 90 sentences concerning neurological conditions that referred to the concept of loss of chance passed in Spain from 2003 (year of the first sentence) until May 2014. Of the 90 sentences, 52 (57.8%) were passed due to diagnostic error and 30 (33.3%) due to inadequate treatment. Seventy-two (80.0%) of the sentences were passed from 2009 onwards, which equates to more than a 300% increase with respect to the 18 (20.0%) issued in the first six years of the study (2003 to 2008). Most of the patients (66.7%) were men, and 61.1% presented sequelae. Hypoxic-ischaemic encephalopathy (14.4%) and spinal cord disorders (14.4%) were the conditions that most commonly led to sentencing. Litigation due to loss of chance in neurological disease in the Spanish public healthcare system has increased significantly in the last few years. The sentences were mainly passed because of diagnostic error or inadequate treatment. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
A feasibility study of color flow Doppler vectorization for automated blood flow monitoring.
Schorer, R; Badoual, A; Bastide, B; Vandebrouck, A; Licker, M; Sage, D
2017-12-01
An ongoing issue in vascular medicine is the measurement of blood flow. Catheterization remains the gold-standard measurement method, although non-invasive techniques are an area of intense research. We present a computational method for real-time measurement of blood flow from color flow Doppler data, with a focus on simplicity and monitoring rather than diagnostics, and we analyze the performance of a proof-of-principle software implementation. We devised a geometrical model geared toward blood flow computation from a color flow Doppler signal, and we developed a software implementation requiring only a standard diagnostic ultrasound device. Detection performance was evaluated by computing flow and its determinants (flow speed, vessel area, and ultrasound beam angle of incidence) on purpose-designed synthetic and phantom-based arterial flow simulations. Flow was appropriately detected in all cases. Errors on synthetic images ranged from nonexistent to substantial depending on experimental conditions. Mean errors on measurements from our phantom flow simulation ranged from 1.2 to 40.2% for angle estimation, and from 3.2 to 25.3% for real-time flow estimation. This study is a proof of concept showing that accurate measurement can be achieved from automated color flow Doppler signal extraction, providing the industry the opportunity for further optimization using raw ultrasound data.
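The core flow computation the abstract alludes to, recovering the true speed from the beam-projected Doppler velocity via an angle correction and multiplying by the vessel cross-section, can be sketched in a few lines. This is a generic illustration, not the paper's implementation; the function name and example numbers are ours:

```python
import math

def doppler_flow(v_axial_cm_s, theta_deg, area_cm2):
    """Estimate volumetric flow (mL/s) from color flow Doppler quantities.

    v_axial_cm_s: flow speed projected onto the ultrasound beam (cm/s)
    theta_deg:    angle of incidence between beam and vessel axis (degrees)
    area_cm2:     vessel cross-sectional area (cm^2)
    """
    # Doppler measures only the velocity component along the beam,
    # so divide by cos(theta) to recover the true speed along the vessel.
    true_speed = v_axial_cm_s / math.cos(math.radians(theta_deg))
    return true_speed * area_cm2  # 1 cm^3/s = 1 mL/s

# Hypothetical example: 30 cm/s axial speed at a 60-degree angle
# through a 0.5 cm^2 vessel -> 60 cm/s true speed -> 30 mL/s
q = doppler_flow(30.0, 60.0, 0.5)
```

The angle term also explains why the reported angle-estimation errors propagate strongly into flow estimates: near 90° the cosine correction blows up.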
Targeted Next Generation Sequencing in Patients with Inborn Errors of Metabolism
Yubero, Dèlia; Brandi, Núria; Ormazabal, Aida; Garcia-Cazorla, Àngels; Pérez-Dueñas, Belén; Campistol, Jaime; Ribes, Antonia; Palau, Francesc
2016-01-01
Background Next-generation sequencing (NGS) technology has fostered genetic diagnosis and is becoming increasingly inexpensive and fast. To evaluate the utility of NGS in the clinical field, a targeted genetic panel approach was designed for the diagnosis of a set of inborn errors of metabolism (IEM). The final aim of the study was to compare the diagnostic yield of NGS in patients who presented with consistent clinical and biochemical suspicion of IEM with that obtained for patients who did not have specific biomarkers. Methods The subjects studied (n = 146) were classified into two categories: Group 1 (n = 81), which consisted of patients with clinical and biochemical suspicion of IEM, and Group 2 (n = 65), which consisted of IEM cases with clinical suspicion and unspecific biomarkers. A total of 171 genes were analyzed using a custom targeted panel of genes followed by Sanger validation. Results Genetic diagnosis was achieved in 50% of patients (73/146). The diagnostic yield for Group 1 was 78% (63/81), and this rate decreased to 15.4% (10/65) in Group 2 (χ² = 76.171; p < 0.0001). Conclusions A rapid and effective genetic diagnosis was achieved in our cohort, particularly in the group that had both clinical and biochemical indications for the diagnosis. PMID:27243974
NASA Astrophysics Data System (ADS)
Zhu, Ying; Fearn, Tom; MacKenzie, Gary; Clark, Ben; Dunn, Jason M.; Bigio, Irving J.; Bown, Stephen G.; Lovat, Laurence B.
2009-07-01
Elastic scattering spectroscopy (ESS) may be used to detect high-grade dysplasia (HGD) or cancer in Barrett's esophagus (BE). When spectra are measured in vivo by a hand-held optical probe, variability among replicated spectra from the same site can hinder the development of a diagnostic model for cancer risk. An experiment was carried out on excised tissue to investigate how two potential sources of this variability, pressure and angle, influence spectral variability, and the results were compared with the variations observed in spectra collected in vivo from patients with Barrett's esophagus. A statistical method called error removal by orthogonal subtraction (EROS) was applied to model and remove this measurement variability, which accounted for 96.6% of the variation in the spectra, from the in vivo data. Its removal allowed the construction of a diagnostic model with specificity improved from 67% to 82% (with sensitivity fixed at 90%). The improvement was maintained in predictions on an independent in vivo data set. EROS works well as an effective pretreatment for Barrett's in vivo data by identifying measurement variability and ameliorating its effect. The procedure reduces the complexity and increases the accuracy and interpretability of the model for classification and detection of cancer risk in Barrett's esophagus.
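The orthogonal-subtraction idea behind EROS can be illustrated with a minimal numerical sketch: estimate the measurement-variability subspace from replicate-difference spectra, then project that subspace out of the data. This is a generic reconstruction under our own assumptions (synthetic data, a single variability component, the name `eros_filter`), not the authors' code:

```python
import numpy as np

def eros_filter(X, V, k=1):
    """Error removal by orthogonal subtraction (EROS), a sketch.

    X: (n_samples, n_wavelengths) spectra to correct
    V: (n_pairs, n_wavelengths) difference spectra between replicate
       measurements of the same site; they span the unwanted variability
    k: number of leading variability directions to remove
    """
    # Leading right singular vectors of the replicate differences
    # estimate the measurement-variability subspace.
    _, _, Vt = np.linalg.svd(V - V.mean(axis=0), full_matrices=False)
    P = Vt[:k]                       # (k, n_wavelengths) orthonormal basis
    # Subtract each spectrum's projection onto that subspace.
    return X - (X @ P.T) @ P

rng = np.random.default_rng(0)
signal = rng.normal(size=(20, 50))
drift_dir = np.ones(50) / np.sqrt(50)        # synthetic "pressure" artifact
X = signal + rng.normal(size=(20, 1)) * drift_dir
V = rng.normal(size=(30, 1)) * drift_dir     # replicate-difference spectra
Xc = eros_filter(X, V, k=1)
# Xc has essentially no remaining component along drift_dir
```

In the study itself the removed subspace captured 96.6% of the spectral variation, after which the diagnostic model's specificity improved from 67% to 82%.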
Prashanth, L K; Taly, A B; Sinha, S; Ravi, V
2007-06-01
Subacute sclerosing panencephalitis (SSPE) is a progressive disease caused by wild-type measles virus leading to premature death. Early diagnosis may help in medical interventions and counseling. The aim of this study was to ascertain diagnostic errors and their possible causes. Retrospective case record analysis of patients with subacute sclerosing panencephalitis, evaluated over a 10-year period, was performed. The following data were analyzed: initial symptoms and diagnosis, interval between onset of symptoms to diagnosis, and implications of delayed diagnosis. Among the 307 patients evaluated, initial diagnosis by various health care professionals was other than subacute sclerosing panencephalitis in 242 patients (78.8%). These included seizures, absence seizures, metachromatic leukodystrophy, Schilder's disease, cerebral palsy, hemiparkinsonism, Wilson's disease, vasculitis, spinocerebellar ataxia, motor neuron disease, nutritional amblyopia, tapetoretinal degeneration, catatonic schizophrenia, and malingering, among others. The interval between precise diagnosis and first reported symptom was 6.2 ± 11.3 months (range, 0.2-96 months; median, 3 months). Forty-four patients (14.3%) who had symptoms for more than 1 year before the precise diagnosis had a protracted course as compared to the rest of the cohort (P = .0001). Early and accurate diagnosis of subacute sclerosing panencephalitis needs a high index of suspicion.
Concept for tremor compensation for a handheld OCT-laryngoscope
NASA Astrophysics Data System (ADS)
Donner, Sabine; Deutsch, Stefanie; Bleeker, Sebastian; Ripken, Tammo; Krüger, Alexander
2013-06-01
Optical coherence tomography (OCT) is a non-invasive imaging technique which can create optical tissue sections, enabling diagnosis of vocal cord tissue. To take full advantage of the non-contact imaging technique, OCT was adapted to an indirect laryngoscope to work on awake patients. Using OCT in a handheld diagnostic device raises the challenges of rapid working-distance adjustment and tracking of axial motion. The optical focus of the endoscopic sample arm and the reference-arm length can be adjusted in a range of 40 mm to 90 mm. Automatic working-distance adjustment is based on image analysis of OCT B-scans, which identifies off-depth images as well as position errors. The movable focal plane and reference plane are used to adjust the working distance to match the sample depth and stabilise the sample in the desired axial position of the OCT scans. The autofocus adjusts the working distance within at most 2.7 seconds for the maximum initial displacement of 40 mm. The amplitude of hand tremor during 60 s of handheld scanning was reduced to 50%, and it was shown that the image stabilisation keeps the position error below 0.5 mm. Fast automatic working-distance adjustment is crucial to minimise the duration of the diagnostic procedure. The image stabilisation compensates for relative axial movements during handheld scanning.
Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L
2018-02-01
A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
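The two analyses the abstract pairs, a Bland-Altman summary (bias and limits of agreement) and a linear-regression check for constant and proportional error, can be sketched as follows. The variant-allele-fraction values are hypothetical illustrations, not data from the study:

```python
import numpy as np

def bland_altman(ref, test, k=1.96):
    """Bland-Altman agreement summary for paired quantitative results.

    Returns the mean difference (bias) and the limits of agreement,
    bias +/- k standard deviations of the paired differences.
    """
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    diff = test - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - k * sd, bias + k * sd)

# Hypothetical variant-allele fractions from a reference and a new assay
ref  = [0.10, 0.25, 0.40, 0.55, 0.70]
test = [0.12, 0.27, 0.41, 0.57, 0.73]
bias, (lo, hi) = bland_altman(ref, test)

# Regression view of the same comparison: a nonzero intercept suggests
# constant error, a slope different from 1 suggests proportional error.
slope, intercept = np.polyfit(ref, test, 1)
```

(The study used Deming regression for method comparison, which also models error in the reference values; ordinary least squares is used here only to keep the sketch dependency-free.)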
Nagayama, T.; Mancini, R. C.; Mayes, D.; ...
2015-11-18
Temperature and density asymmetry diagnosis is critical to advance inertial confinement fusion (ICF) science. A multi-monochromatic x-ray imager (MMI) is an attractive diagnostic for this purpose. The MMI records the spectral signature from an ICF implosion core with time resolution, 2-D space resolution, and spectral resolution. While narrow-band images and 2-D space-resolved spectra from the MMI data constrain temperature and density spatial structure of the core, the accuracy of the images and spectra depends not only on the quality of the MMI data but also on the reliability of the post-processing tools. In this paper, we synthetically quantify the accuracy of images and spectra reconstructed from MMI data. Errors in the reconstructed images are less than a few percent when the space-resolution effect is applied to the modeled images. The errors in the reconstructed 2-D space-resolved spectra are also less than a few percent except those for the peripheral regions. Spectra reconstructed for the peripheral regions have slightly but systematically lower intensities by ~6% due to the instrumental spatial-resolution effects. However, this does not alter the relative line ratios and widths and thus does not affect the temperature and density diagnostics. We also investigate the impact of the pinhole size variation on the extracted images and spectra. A 10% pinhole size variation could introduce spatial bias to the images and spectra of ~10%. A correction algorithm is developed, and it successfully reduces the errors to a few percent. Finally, it is desirable to perform similar synthetic investigations to fully understand the reliability and limitations of each MMI application.
de Gusmão, Claudio M; Guerriero, Réjean M; Bernson-Leung, Miya Elizabeth; Pier, Danielle; Ibeziako, Patricia I; Bujoreanu, Simona; Maski, Kiran P; Urion, David K; Waugh, Jeff L
2014-08-01
In children, functional neurological symptom disorders are frequently the basis for presentation for emergency care. Pediatric epidemiological and outcome data remain scarce. We aimed to assess the diagnostic accuracy of trainees' first impressions in our pediatric emergency room and to describe manner of presentation, demographic data, socioeconomic impact, and clinical outcomes, including parental satisfaction. (1) Over 1 year, psychiatry consultations for neurology patients with a functional neurological symptom disorder were retrospectively reviewed. (2) For 3 months, all children whose emergency room presentation suggested the diagnosis were prospectively collected. (3) Three to six months after prospective collection, families completed a structured telephone interview on outcome measures. Twenty-seven patients were retrospectively assessed; 31 patients were prospectively collected. Trainees accurately predicted the diagnosis in 93% of the retrospective and 94% of the prospective cohort. Mixed presentations were most common (usually sensory-motor changes, e.g., weakness and/or paresthesias). Associated stressors were mundane and ubiquitous, rarely severe. Families were substantially affected, reporting a mean symptom duration of 7.4 (standard error of the mean ± 1.33) weeks, missing 22.4 (standard error of the mean ± 5.47) days of school, and 8.3 (standard error of the mean ± 2.88) parental workdays (prospective cohort). At follow-up, 78% were symptom free. Parental dissatisfaction was rare and attributed to poor rapport and/or insufficient information conveyed. Trainees' clinical impression was accurate in predicting a later diagnosis of functional neurological symptom disorder. Extraordinary life stressors are not required to trigger the disorder in children. Although prognosis is favorable, families incur substantial economic burden and negative educational impact.
Improving recognition and appropriately communicating the diagnosis may speed access to treatment and potentially reduce the disability and cost of this disorder. Copyright © 2014 Elsevier Inc. All rights reserved.
Vosoughi, Aram; Smith, Paul Taylor; Zeitouni, Joseph A; Sodeman, Gregori M; Jorda, Merce; Gomez-Fernandez, Carmen; Garcia-Buitrago, Monica; Petito, Carol K; Chapman, Jennifer R; Campuzano-Zuluaga, German; Rosenberg, Andrew E; Kryvenko, Oleksandr N
2018-04-30
Frozen section telepathology interpretation experience has been largely limited to practices with locations significantly distant from one another and a sporadic need for frozen section diagnosis. In 2010 we established a real-time non-robotic telepathology system in a very active cancer center for daily frozen section service. Herein, we evaluate its accuracy compared to direct microscopic interpretation performed in the main hospital by the same faculty, and its cost-efficiency over a 1-year period. Of 643 cases (1416 parts) requiring intraoperative consultation, 333 cases (690 parts) were examined by telepathology and 310 cases (726 parts) by direct microscopy. Corresponding discrepancy rates were 2.6% (18 cases: 6 (0.9%) sampling and 12 (1.7%) diagnostic errors) and 3.2% (23 cases: 8 (1.1%) sampling and 15 (2.1%) diagnostic errors), P=.63. The sensitivity and specificity of intraoperative frozen diagnosis were 0.92 and 0.99, respectively, in telepathology, and 0.90 and 0.99, respectively, in direct microscopy. There was no correlation of error incidence with the postgraduate year level of residents involved in the telepathology service. Cost analysis indicated that the time saved by telepathology amounted to $19,691 over the one-year study period, while the capital cost of establishing the system was $8,924. Thus, real-time non-robotic telepathology is a reliable and easy-to-use tool for frozen section evaluation in busy clinical settings, especially when frozen section service involves more than one hospital, and it is cost-efficient when travel is a component of the service. Copyright © 2018. Published by Elsevier Inc.
Chanani, Sheila; Wacksman, Jeremy; Deshmukh, Devika; Pantvaidya, Shanti; Fernandez, Armida; Jayaraman, Anuja
2016-12-01
Acute malnutrition is linked to child mortality and morbidity. Community-Based Management of Acute Malnutrition (CMAM) programs can be instrumental in large-scale detection and treatment of undernutrition. The World Health Organization (WHO) 2006 weight-for-height/length tables are diagnostic tools available to screen for acute malnutrition. Frontline workers (FWs) in a CMAM program in Dharavi, Mumbai, were using CommCare, a mobile application, for monitoring and case management of children in combination with the paper-based WHO simplified tables. A strategy was undertaken to digitize the WHO tables into the CommCare application. To measure differences in diagnostic accuracy in community-based screening for acute malnutrition, by FWs, using a mobile-based solution. Twenty-seven FWs initially used the paper-based tables and then switched to an updated mobile application that included a nutritional grade calculator. Human error rates specifically associated with grade classification were calculated by comparison of the grade assigned by the FW to the grade each child should have received based on the same WHO tables. Cohen kappa coefficient, sensitivity and specificity rates were also calculated and compared for paper-based grade assignments and calculator grade assignments. Comparing FWs (N = 14) who completed at least 40 screenings without and 40 with the calculator, the error rates were 5.5% and 0.7%, respectively (p < .0001). Interrater reliability (κ) increased to an almost perfect level (>.90), from .79 to .97, after switching to the mobile calculator. Sensitivity and specificity also improved significantly. The mobile calculator significantly reduces an important component of human error in using the WHO tables to assess acute malnutrition at the community level. © The Author(s) 2016.
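The agreement statistics reported for the paper-versus-calculator comparison (kappa rising from .79 to .97) can be illustrated with a small sketch of Cohen's kappa computed from a confusion matrix of assigned versus correct grades. The counts below are hypothetical, not the study's data:

```python
def cohen_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: grades assigned by the worker, columns: correct grades)."""
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of cases on the diagonal
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Chance agreement from the marginal category frequencies
    expected = sum(
        (sum(confusion[i]) / n) * (sum(row[i] for row in confusion) / n)
        for i in range(len(confusion))
    )
    return (observed - expected) / (1 - expected)

# Hypothetical 2x2 table: normal vs acutely malnourished classifications
m = [[40, 2],
     [3, 55]]
kappa = cohen_kappa(m)          # ~0.90, "almost perfect" on the usual scale
error_rate = (2 + 3) / 100      # off-diagonal fraction, analogous to the
                                # 5.5% vs 0.7% human error rates reported
```

Kappa discounts agreement expected by chance, which is why it is preferred over raw accuracy when grade categories are imbalanced.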
Mehrad, Mitra; Chernock, Rebecca D; El-Mofty, Samir K; Lewis, James S
2015-12-01
Medical error is a significant problem in the United States, and pathologic diagnoses are a significant source of errors. Prior studies have shown that second-opinion pathology review results in clinically major diagnosis changes in approximately 0.6% to 5.8% of patients. The few studies specifically on head and neck pathology have suggested rates of changed diagnoses that are even higher. Objectives: To evaluate the diagnostic discrepancy rates in patients referred to our institution, where all such cases are reviewed by a head and neck subspecialty service, and to identify specific areas with more susceptibility to errors. Five hundred consecutive, scanned head and neck pathology reports from patients referred to our institution were compared for discrepancies between the outside and in-house diagnoses. Major discrepancies were defined as those resulting in a significant change in patient clinical management and/or prognosis. Major discrepancies occurred in 20 cases (4% overall). Informative follow-up material was available on 11 of the 20 patients (55.0%), among whom the second opinion was supported in 11 of 11 cases (100%). Dysplasia versus invasive squamous cell carcinoma was the most common area of discrepancy (7 of 20; 35%), and by anatomic subsite, the sinonasal tract (4 of 21; 19.0%) had the highest rate of discrepant diagnoses. Of the major discrepant diagnoses, 12 (12 of 20; 60%) involved a change from benign to malignant, 1 (1 of 20; 5%) a change from malignant to benign, and 6 (6 of 20; 30%) involved tumor classification. Head and neck pathology is a relatively high-risk area, prone to erroneous diagnoses in a small fraction of patients. This study supports the importance of second-opinion review by subspecialized pathologists for the best care of patients.
Dual-process cognitive interventions to enhance diagnostic reasoning: a systematic review.
Lambe, Kathryn Ann; O'Reilly, Gary; Kelly, Brendan D; Curristan, Sarah
2016-10-01
Diagnostic error incurs enormous human and economic costs. The dual-process model of reasoning provides a framework for understanding the diagnostic process and attributes certain errors to faulty cognitive shortcuts (heuristics). The literature contains many suggestions to counteract these and to enhance analytical and non-analytical modes of reasoning. To identify, describe and appraise studies that have empirically investigated interventions to enhance analytical and non-analytical reasoning among medical trainees and doctors, and to assess their effectiveness. Systematic searches of five databases were carried out (Medline, PsycInfo, Embase, Education Resource Information Centre (ERIC) and Cochrane Database of Controlled Trials), supplemented with searches of bibliographies and relevant journals. Included studies evaluated an intervention to enhance analytical and/or non-analytical reasoning among medical trainees or doctors. Twenty-eight studies were included under the categories of educational interventions, checklists, cognitive forcing strategies, guided reflection, instructions at test, and other interventions. While many of the studies found some effect of interventions, guided reflection interventions emerged as the most consistently successful across five studies, and cognitive forcing strategies improved accuracy and confidence judgements. Significant heterogeneity of measurement approaches was observed, and existing studies are largely limited to early-career doctors. Results to date are promising, and this relatively young field is now close to a point where these kinds of cognitive interventions can be recommended to educators. Further research with refined methodology and more diverse samples is required before firm recommendations may be made for medical education and policy; however, these results suggest that such interventions hold promise, with much current enthusiasm for new research. Published by the BMJ Publishing Group Limited.
[Transient ischemic attacks in the elderly: new definition and diagnostic difficulties].
Rancurel, Gérald
2005-03-01
Transient ischemic attacks (TIA) are very frequent in the elderly, and their frequency increases beyond 65 years of age. However, no epidemiologic study has been specifically dedicated to elderly patients. The first definition of TIA was a sudden focal neurologic deficit lasting less than 24 hours, presumed to be of vascular origin and located in a specific arterial territory of the brain or eye. The Working Study Group has proposed a new definition: TIA is a brief episode of neurologic dysfunction caused by focal brain or retinal ischemia, with clinical symptoms typically lasting less than one hour, most often a few minutes, and without evidence of acute infarction. Diffusion-weighted MRI may show an aspect of cytotoxic oedema very early. The one-hour criterion associated with a stable neurological deficit is required for initiating IV thrombolysis if MR angiography shows an occlusion of the supra-aortic trunks or intracranial arteries, even in aged patients. Each TIA constitutes a major risk for a completed infarct resulting in disability or death. Hypertension is the main risk factor for TIAs, followed by atrial fibrillation, diabetes, coronary artery disease and a sedentary lifestyle. These factors multiply the stroke risk by 4. In the elderly, TIAs are particularly associated with lacunar infarcts in the territory of deep perforating arteries. TIAs represent a neurologic emergency that allows no delay in clinical and laboratory investigations, such as ultrasound echography and diffusion-weighted MRI. Diagnostic errors are often due to the frequent polypathology and cognitive changes of advanced age. The most misleading symptoms are vertigo, imbalance, falls, and disorders of consciousness. Unawareness of the deficit is also a frequent cause of failure of TIA diagnosis. Conversely, the most frequent cause of overdiagnosis is epileptic seizures, which are often underrecognized.
Feature Acquisition with Imbalanced Training Data
NASA Technical Reports Server (NTRS)
Thompson, David R.; Wagstaff, Kiri L.; Majid, Walid A.; Jones, Dayton L.
2011-01-01
This work considers cost-sensitive feature acquisition that attempts to classify a candidate datapoint from incomplete information. In this task, an agent acquires features of the datapoint using one or more costly diagnostic tests, and eventually ascribes a classification label. A cost function describes both the penalties for feature acquisition and misclassification errors. A common solution is a Cost Sensitive Decision Tree (CSDT), a branching sequence of tests with features acquired at interior decision points and class assignment at the leaves. CSDTs can incorporate a wide range of diagnostic tests and can reflect arbitrary cost structures. They are particularly useful for online applications due to their low computational overhead. In this innovation, CSDTs are applied to cost-sensitive feature acquisition where the goal is to recognize very rare or unique phenomena in real time. Example applications from this domain include four areas. In stream processing, one seeks unique events in a real-time data stream that is too large to store. In fault protection, a system must adapt quickly to react to anticipated errors by triggering repair activities or follow-up diagnostics. With real-time sensor networks, one seeks to classify unique, new events as they occur. With observational sciences, a new generation of instrumentation seeks unique events through online analysis of large observational datasets. This work presents a solution based on transfer learning principles that permits principled CSDT learning while exploiting any prior knowledge of the designer to correct both between-class and within-class imbalance. Training examples are adaptively reweighted based on a decomposition of the data attributes. The result is a new, nonparametric representation that matches the anticipated attribute distribution for the target events.
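The asymmetric cost structure that drives a CSDT can be illustrated by the simpler expected-cost decision rule that its leaves implement: pick the label with the lowest expected penalty. This sketch is a generic illustration of cost-sensitive classification, not NASA's implementation; the labels, probabilities, and penalties are hypothetical:

```python
def min_cost_label(posteriors, cost):
    """Pick the label minimizing expected misclassification cost.

    posteriors: dict label -> estimated class probability
    cost: dict (true_label, predicted_label) -> penalty
    """
    def expected_cost(pred):
        return sum(p * cost[(true, pred)] for true, p in posteriors.items())
    return min(posteriors, key=expected_cost)

# A rare "event" class whose misses cost 50x more than false alarms,
# mirroring the rare-phenomena detection setting described above.
cost = {("event", "event"): 0, ("event", "background"): 50,
        ("background", "event"): 1, ("background", "background"): 0}

# Even a modest 10% event probability tips the decision toward "event":
# expected cost 0.9 for predicting "event" vs 5.0 for "background".
label = min_cost_label({"event": 0.1, "background": 0.9}, cost)
```

Reweighting training examples by class, as the abstract describes, has the same effect as inflating the miss penalty: it biases the learned tree toward catching the rare class.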
Influence of the Atmospheric Model on Hanle Diagnostics
NASA Astrophysics Data System (ADS)
Ishikawa, Ryohko; Uitenbroek, Han; Goto, Motoshi; Iida, Yusuke; Tsuneta, Saku
2018-05-01
We clarify the uncertainty in the inferred magnetic field vector via the Hanle diagnostics of the hydrogen Lyman-α line when the stratification of the underlying atmosphere is unknown. We calculate the anisotropy of the radiation field with plane-parallel semi-empirical models under the nonlocal thermal equilibrium condition and derive linear polarization signals for all possible parameters of magnetic field vectors based on an analytical solution of the atomic polarization and Hanle effect. We find that the semi-empirical models of the inter-network region (FAL-A) and network region (FAL-F) show similar degrees of anisotropy in the radiation field, and this similarity results in an acceptable inversion error (e.g., ~40 G instead of 50 G in field strength and ~100° instead of 90° in inclination) when FAL-A and FAL-F are swapped. However, the semi-empirical models of FAL-C (averaged quiet-Sun model including both inter-network and network regions) and FAL-P (plage regions) yield an atomic polarization that deviates from all other models, which makes it difficult to precisely determine the magnetic field vector if the correct atmospheric model is not known (e.g., the inversion error is much larger than 40% of the field strength; >70 G instead of 50 G). These results clearly demonstrate that the choice of model atmosphere is important for Hanle diagnostics. As is well known, one way to constrain the average atmospheric stratification is to measure the center-to-limb variation of the linear polarization signals. The dependence of the center-to-limb variations on the atmospheric model is also presented in this paper.
Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz
2015-01-01
Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. Our aims were to examine the correlation of the errors of the hand and the third molar method and to demonstrate how to calculate the combined age estimate. Clinical routine radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI -0.124 to +0.076, p = 0.64) allows the combined age estimate to be calculated as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of the errors (hand = 0.97 years, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the common estimate and its variance. This is also possible when reference data for the hand and the third molar methods are established independently of each other, using different samples.
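The weighted-average combination of uncorrelated estimates described above is standard inverse-variance weighting, which can be sketched directly. The standard deviations (0.97 and 1.35 years) come from the abstract; the two example age estimates are hypothetical:

```python
import math

# Combine two independent, uncorrelated age estimates by inverse-variance
# weighting. SDs are from the abstract; the example ages are hypothetical.

def combine(est_a, sd_a, est_b, sd_b):
    w_a, w_b = 1 / sd_a**2, 1 / sd_b**2              # inverse-variance weights
    combined = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    combined_sd = math.sqrt(1 / (w_a + w_b))         # SD of the weighted average
    return combined, combined_sd

# Hand-bone estimate 16.2 y (SD 0.97), third-molar estimate 17.0 y (SD 1.35):
age, sd = combine(16.2, 0.97, 17.0, 1.35)
print(round(sd, 2))  # 0.79
```

The combined SD of 0.79 years reproduces the value reported in the abstract, independent of the particular age estimates.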
Ballesteros Peña, Sendoa
2013-04-01
To estimate the frequency of therapeutic errors and to evaluate diagnostic accuracy in the recognition of shockable rhythms by automated external defibrillators. A retrospective descriptive study. Nine basic life support units in Biscay (Spain). The study included 201 patients with cardiac arrest from 2006 to 2011. The suitability of treatment (shock or no shock) after each rhythm analysis was assessed and errors were identified. The sensitivity, specificity and predictive values with 95% confidence intervals were then calculated. A total of 811 electrocardiographic rhythm analyses were obtained, of which 120 (14.1%), from 30 patients, corresponded to shockable rhythms. Sensitivity and specificity for appropriate automated external defibrillator management of a shockable rhythm were 85% (95% CI, 77.5% to 90.3%) and 100% (95% CI, 99.4% to 100%), respectively. Positive and negative predictive values were 100% (95% CI, 96.4% to 100%) and 97.5% (95% CI, 96% to 98.4%), respectively. There were 18 (2.2%; 95% CI, 1.3% to 3.5%) errors associated with defibrillator management, all relating to cases of shockable rhythms that were not shocked. One error was operator dependent, 6 were defibrillator dependent (caused by interaction with pacemakers), and 11 were unclassified. Automated external defibrillators have a very high specificity and moderately high sensitivity. There are few operator-dependent errors. Implanted pacemakers interfere with defibrillator analyses. Copyright © 2012 Elsevier España, S.L. All rights reserved.
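The reported metrics can be reconstructed from the abstract's counts: 120 shockable rhythms with 18 missed shocks gives TP = 102 and FN = 18, and a specificity of 100% over the remaining 691 analyses gives TN = 691 and FP = 0 (the 2x2 table is inferred from the reported percentages, not stated explicitly in the abstract):

```python
# Standard diagnostic-accuracy metrics from a 2x2 table; counts reconstructed
# from the abstract (TP = 102, FN = 18, TN = 691, FP = 0).

def diagnostic_metrics(tp, fn, tn, fp):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp) if tp + fp else None,  # undefined if no positives
        "npv": tn / (tn + fn),
    }

m = diagnostic_metrics(tp=102, fn=18, tn=691, fp=0)
print(round(m["sensitivity"], 2), round(m["npv"], 3))  # 0.85 0.975
```

These match the point estimates in the abstract: sensitivity 85%, specificity 100%, PPV 100%, NPV 97.5%.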
Kurvers, Ralf H J M; de Zoete, Annemarie; Bachman, Shelby L; Algra, Paul R; Ostelo, Raymond
2018-01-01
Diagnosing the causes of low back pain is a challenging task, prone to errors. A novel approach to increasing diagnostic accuracy in medical decision making is collective intelligence, which refers to the ability of groups to outperform individual decision makers in solving problems. We investigated whether combining the independent ratings of chiropractors, chiropractic radiologists and medical radiologists can improve diagnostic accuracy when interpreting diagnostic images of the lumbosacral spine. Evaluations were obtained from two previously published studies: study 1 consisted of 13 raters independently rating 300 lumbosacral radiographs; study 2 consisted of 14 raters independently rating 100 lumbosacral magnetic resonance images. In both studies, raters evaluated the presence of "abnormalities", which are indicators of a serious health risk and warrant immediate further examination. We combined the independent decisions of raters using a majority rule, which takes as the final diagnosis the decision of the majority of the group. We compared the performance of the majority rule to the performance of single raters. Our results show that with increasing group size (i.e., an increasing number of independent decisions), both sensitivity and specificity increased in both datasets, with groups consistently outperforming single raters. These results held for radiograph and MR image reading alike. Our findings suggest that combining independent ratings can improve the accuracy of lumbosacral diagnostic image reading.
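The majority rule described above is simple to state in code; the example ratings below are hypothetical (odd group sizes avoid ties):

```python
from collections import Counter

# Majority rule: the final diagnosis is the label chosen by most of the
# independent raters. Ratings here are hypothetical.

def majority_diagnosis(ratings):
    """Return the label chosen by the majority of independent raters."""
    return Counter(ratings).most_common(1)[0][0]

ratings = ["abnormal", "normal", "abnormal", "abnormal", "normal"]
print(majority_diagnosis(ratings))  # abnormal
```

With independent raters whose individual accuracy exceeds chance, such pooling tends to raise both sensitivity and specificity as group size grows, which is the effect the study reports.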
Di Nuovo, Alessandro G; Di Nuovo, Santo; Buono, Serafino
2012-02-01
The estimation of a person's intelligence quotient (IQ) by means of psychometric tests is indispensable in the application of psychological assessment to several fields. When complex tests such as the Wechsler scales, which are the most commonly used and universally recognized parameter for the diagnosis of degrees of retardation, are not applicable, it is necessary to use other psycho-diagnostic tools better suited to the subject's specific condition. To ensure a homogeneous diagnosis, however, it is necessary to reach a common metric; thus, the aim of our work is to build models able to estimate the Wechsler IQ accurately and reliably, starting from different psycho-diagnostic tools. Four different psychometric tests (Leiter international performance scale; coloured progressive matrices test; the mental development scale; psycho-educational profile), along with the Wechsler scale, were administered to a group of 40 mentally retarded subjects, with various pathologies, and control persons. The resulting database is used to evaluate Wechsler IQ estimation models starting from the scores obtained in the other tests. Five modelling methods, two statistical and three from machine learning, the latter belonging to the family of artificial neural networks (ANNs), are employed to build the estimator. Several error metrics for estimated IQ and for retardation-level classification are defined to compare the performance of the various models with univariate and multivariate analyses. Eight empirical studies show that, after ten-fold cross-validation, the best average estimation error is 3.37 IQ points and the mental retardation level classification error is 7.5%. Furthermore, our experiments demonstrate the superior performance of ANN methods over statistical regression: in all cases considered, ANN models show the lowest estimation error (by 0.12 to 0.9 IQ points) and the lowest classification error (by 2.5% to 10%). 
Since the estimation performance is better than the confidence interval of the Wechsler scales (five IQ points), we consider the models very accurate and reliable, and they can be used to help clinical diagnosis. Accordingly, computer software based on the results of our work is currently used in a clinical center, and empirical trials confirm its validity. Furthermore, the positive results of our multivariate studies suggest new approaches for clinicians. Copyright © 2011 Elsevier B.V. All rights reserved.
A hybrid framework for quantifying the influence of data in hydrological model calibration
NASA Astrophysics Data System (ADS)
Wright, David P.; Thyer, Mark; Westra, Seth; McInerney, David
2018-06-01
Influence diagnostics aim to identify a small number of influential data points that have a disproportionate impact on the model parameters and/or predictions. The key issues with current influence diagnostic techniques are that the regression-theory approaches do not provide hydrologically relevant influence metrics, while the case-deletion approaches are computationally expensive to calculate. The main objective of this study is to introduce a new two-stage hybrid framework that overcomes these challenges by delivering hydrologically relevant influence metrics in a computationally efficient manner. Stage one uses computationally efficient regression-theory influence diagnostics to identify the most influential points based on Cook's distance. Stage two then uses case-deletion influence diagnostics to quantify the influence of points using hydrologically relevant metrics. To illustrate the application of the hybrid framework, we conducted three experiments on 11 hydro-climatologically diverse Australian catchments using the GR4J hydrological model. The first experiment investigated how many data points from stage one need to be retained in order to reliably identify those points that have the highest influence on hydrologically relevant metrics. We found that a choice of 30-50 is suitable for hydrological applications similar to those explored in this study (30 points identified the most influential data 98% of the time and reduced the required recalibrations by 99% for a 10 year calibration period). The second experiment found little evidence of a change in the magnitude of influence with increasing calibration period length from 1, 2, 5 to 10 years. Even for 10 years the impact of influential points can still be high (>30% influence on maximum predicted flows). The third experiment compared the standard least squares (SLS) objective function with the weighted least squares (WLS) objective function on a 10 year calibration period.
In two out of three flow metrics there was evidence that SLS, with the assumption of homoscedastic residual error, identified data points with higher influence (largest changes of 40%, 10%, and 44% for the maximum, mean, and low flows, respectively) than WLS, with the assumption of heteroscedastic residual errors (largest changes of 26%, 6%, and 6% for the maximum, mean, and low flows, respectively). The hybrid framework complements existing model diagnostic tools and can be applied to a wide range of hydrological modelling scenarios.
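The stage-one screening rests on Cook's distance, which flags points whose removal would most change a regression fit. A minimal sketch for a simple linear fit follows (the data are synthetic; the study itself applies the idea to the GR4J hydrological model, not to linear regression):

```python
# Cook's distance for a simple linear fit y = b0 + b1*x, used to rank
# candidate influential points before expensive case-deletion reruns.
# D_i = e_i^2 / (p * MSE) * h_i / (1 - h_i)^2, with leverage h_i and p = 2.

def cooks_distance(x, y):
    n, p = len(x), 2
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
    b0 = sum(y) / n - b1 * xbar
    resid = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
    mse = sum(e ** 2 for e in resid) / (n - p)
    lev = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]
    return [e ** 2 / (p * mse) * h / (1 - h) ** 2 for e, h in zip(resid, lev)]

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [1.1, 2.0, 2.9, 4.2, 5.0, 6.1, 7.0, 15.0]  # last point is an outlier
d = cooks_distance(x, y)
print(d.index(max(d)))  # 7 — the outlier dominates the fit
```

In the hybrid framework, only the top-ranked points by this cheap screen are passed on to full case-deletion recalibration, which is where the reported 99% reduction in recalibrations comes from.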
Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J.; Kujawa, Autumn J.; Laptook, Rebecca S.; Torpey, Dana C.; Klein, Daniel N.
2017-01-01
The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission—although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children’s ERN approximately three years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately three years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. 
Results suggest that parenting may shape children’s error processing, and thereby risk for anxiety, through environmental conditioning, although future work is needed to confirm this hypothesis. PMID:25092483
Behavioural and neural basis of anomalous motor learning in children with autism.
Marko, Mollie K; Crocetti, Deana; Hulst, Thomas; Donchin, Opher; Shadmehr, Reza; Mostofsky, Stewart H
2015-03-01
Autism spectrum disorder is a developmental disorder characterized by deficits in social and communication skills and repetitive and stereotyped interests and behaviours. Although not part of the diagnostic criteria, individuals with autism experience a host of motor impairments, potentially due to abnormalities in how they learn motor control throughout development. Here, we used behavioural techniques to quantify motor learning in autism spectrum disorder, and structural brain imaging to investigate the neural basis of that learning in the cerebellum. Twenty children with autism spectrum disorder and 20 typically developing control subjects, aged 8-12, made reaching movements while holding the handle of a robotic manipulandum. In random trials the reach was perturbed, resulting in errors that were sensed through vision and proprioception. The brain learned from these errors and altered the motor commands on the subsequent reach. We measured learning from error as a function of the sensory modality of that error, and found that children with autism spectrum disorder outperformed typically developing children when learning from errors that were sensed through proprioception, but underperformed typically developing children when learning from errors that were sensed through vision. Previous work had shown that this learning depends on the integrity of a region in the anterior cerebellum. Here we found that the anterior cerebellum, extending into lobule VI, and parts of lobule VIII were smaller than normal in children with autism spectrum disorder, with a volume that was predicted by the pattern of learning from visual and proprioceptive errors. We suggest that the abnormal patterns of motor learning in children with autism spectrum disorder, showing an increased sensitivity to proprioceptive error and a decreased sensitivity to visual error, may be associated with abnormalities in the cerebellum. © The Author (2015). 
Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Eisenberg, Eugene
1965-01-01
Frequent errors in the diagnosis of diabetes insipidus arise from (1) failure to produce an adequate stimulus for release of antidiuretic hormone, and (2) failure to appreciate acute or chronic changes in renal function that may obscure test results. Properly timed determination of body weight, urine volume and serum and urine osmolarity during the course of water deprivation, and comparison of these values with those obtained after administration of exogenous vasopressin, eliminates most diagnostic errors. In four patients who had experienced local and systemic reactions to other exogenous forms of vasopressin, diabetes insipidus was satisfactorily controlled by administration of synthetic lysine-8 vasopressin in nasal spray. A fifth patient was also treated satisfactorily with this preparation. PMID:14290932
Ho, B T; Tsai, M J; Wei, J; Ma, M; Saipetch, P
1996-01-01
A new method of video compression for angiographic images has been developed to achieve a high compression ratio (~20:1) while eliminating the block artifacts that lead to loss of diagnostic accuracy. This method adopts the Motion Picture Experts Group's (MPEG) motion-compensated prediction to take advantage of frame-to-frame correlation. However, in contrast to MPEG, the error images arising from mismatches in the motion estimation are encoded by the discrete wavelet transform (DWT) rather than the block discrete cosine transform (DCT). Furthermore, the authors developed a classification scheme that labels each block in an image as intra, error, or background type and encodes it accordingly. This hybrid coding can significantly improve compression efficiency in certain cases. The method can be generalized to any dynamic image sequence application sensitive to block artifacts.
Case-based clinical reasoning in feline medicine: 1: Intuitive and analytical systems.
Canfield, Paul J; Whitehead, Martin L; Johnson, Robert; O'Brien, Carolyn R; Malik, Richard
2016-01-01
This is Article 1 of a three-part series on clinical reasoning that encourages practitioners to explore and understand how they think and make case-based decisions. It is hoped that, in the process, they will learn to trust their intuition but, at the same time, put in place safeguards to diminish the impact of bias and misguided logic on their diagnostic decision-making. This first article discusses the relative merits and shortcomings of System 1 thinking (immediate and unconscious) and System 2 thinking (effortful and analytical). Articles 2 and 3, to appear in the March and May 2016 issues of JFMS, respectively, will examine managing cognitive error, and use of heuristics (mental short cuts) and illness scripts in diagnostic reasoning. © The Author(s) 2016.
Winnowing DNA for Rare Sequences: Highly Specific Sequence and Methylation Based Enrichment
Thompson, Jason D.; Shibahara, Gosuke; Rajan, Sweta; Pel, Joel; Marziali, Andre
2012-01-01
Rare mutations in cell populations are known to be hallmarks of many diseases and cancers. Similarly, differential DNA methylation patterns arise in rare cell populations with diagnostic potential such as fetal cells circulating in maternal blood. Unfortunately, the frequency of alleles with diagnostic potential, relative to wild-type background sequence, is often well below the frequency of errors in currently available methods for sequence analysis, including very high throughput DNA sequencing. We demonstrate a DNA preparation and purification method that through non-linear electrophoretic separation in media containing oligonucleotide probes, achieves 10,000 fold enrichment of target DNA with single nucleotide specificity, and 100 fold enrichment of unmodified methylated DNA differing from the background by the methylation of a single cytosine residue. PMID:22355378
A real-time spectral mapper as an emerging diagnostic technology in biomedical sciences.
Epitropou, George; Kavvadias, Vassilis; Iliou, Dimitris; Stathopoulos, Efstathios; Balas, Costas
2013-01-01
Real time spectral imaging and mapping at video rates can have tremendous impact not only on diagnostic sciences but also on fundamental physiological problems. We report the first real-time spectral mapper based on the combination of snap-shot spectral imaging and spectral estimation algorithms. Performance evaluation revealed that six band imaging combined with the Wiener algorithm provided high estimation accuracy, with error levels lying within the experimental noise. High accuracy is accompanied with much faster, by 3 orders of magnitude, spectral mapping, as compared with scanning spectral systems. This new technology is intended to enable spectral mapping at nearly video rates in all kinds of dynamic bio-optical effects as well as in applications where the target-probe relative position is randomly and fast changing.
3D equilibrium reconstruction with islands
NASA Astrophysics Data System (ADS)
Cianciosa, M.; Hirshman, S. P.; Seal, S. K.; Shafer, M. W.
2018-04-01
This paper presents the development of a 3D equilibrium reconstruction tool and the results of the first-ever reconstruction of an island equilibrium. The SIESTA non-nested equilibrium solver has been coupled to the V3FIT 3D equilibrium reconstruction code. Computed from a coupled VMEC and SIESTA model, synthetic signals are matched to measured signals by finding an optimal set of equilibrium parameters. By using the normalized pressure in place of normalized flux, non-equilibrium quantities needed by diagnostic signals can be efficiently mapped to the equilibrium. The effectiveness of this tool is demonstrated by reconstructing an island equilibrium of a DIII-D inner wall limited L-mode case with an n = 1 error field applied. Flat spots in Thomson and ECE temperature diagnostics show the reconstructed islands have the correct size and phase.
Inference on cancer screening exam accuracy using population-level administrative data.
Jiang, H; Brown, P E; Walter, S D
2016-01-15
This paper develops a model for cancer screening and cancer incidence data, accommodating the partially unobserved disease status, clustered data structures, general covariate effects, and dependence between exams. The true unobserved cancer and detection status of screening participants are treated as latent variables, and a Markov Chain Monte Carlo algorithm is used to estimate the Bayesian posterior distributions of the diagnostic error rates and disease prevalence. We show how the Bayesian approach can be used to draw inferences about screening exam properties and disease prevalence while allowing for the possibility of conditional dependence between two exams. The techniques are applied to the estimation of the diagnostic accuracy of mammography and clinical breast examination using data from the Ontario Breast Screening Program in Canada. Copyright © 2015 John Wiley & Sons, Ltd.
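The paper's MCMC machinery handles latent disease status and dependence between exams; as a minimal illustration of the underlying Bayesian update, here is the conjugate Beta-Binomial posterior for an exam's sensitivity in the simpler case where true status is known (the counts and the uniform prior are hypothetical):

```python
# Conjugate Beta-Binomial update for a diagnostic error rate when true
# disease status is observed. Counts are hypothetical; the paper's model
# additionally treats true status as a latent variable estimated by MCMC.

def beta_posterior(successes, failures, a_prior=1.0, b_prior=1.0):
    """Posterior Beta(a, b) parameters and posterior mean under a Beta prior."""
    a, b = a_prior + successes, b_prior + failures
    return a, b, a / (a + b)

# 90 of 100 truly diseased cases detected by the screening exam:
a, b, mean = beta_posterior(successes=90, failures=10)
print(round(mean, 3))  # 0.892
```

When the true status is unobserved, this closed-form update is no longer available, which is why the paper resorts to Markov Chain Monte Carlo over the latent cancer and detection indicators.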
Dental Students' Interpretations of Digital Panoramic Radiographs on Completely Edentate Patients.
Kratz, Richard J; Nguyen, Caroline T; Walton, Joanne N; MacDonald, David
2018-03-01
The ability of dental students to interpret digital panoramic radiographs (PANs) of edentulous patients has not been documented. The aim of this retrospective study was to compare the ability of second-year (D2) dental students with that of third- and fourth-year (D3-D4) dental students to interpret and identify positional errors in digital PANs obtained from patients with complete edentulism. A total of 169 digital PANs from edentulous patients were assessed by D2 (n=84) and D3-D4 (n=85) dental students at one Canadian dental school. The correctness of the students' interpretations was determined by comparison to a gold standard established by assessments of the same PANs by two experts (a graduate student in prosthodontics and an oral and maxillofacial radiologist). Data were collected from September 1, 2006, when digital radiography was implemented at the university, to December 31, 2012. Nearly all (95%) of the PANs were diagnostically acceptable despite a high proportion (92%) of detected positional errors. A total of 301 positional errors were identified in the sample. The D2 students identified significantly more (p=0.002) positional errors than the D3-D4 students. There was no significant difference (p=0.059) in the distribution of radiographic interpretation errors between the two student groups when compared to the gold standard. Overall, the category of extragnathic findings had the highest number of false negatives (43) reported. In this study, dental students interpreted digital PANs of edentulous patients satisfactorily, but they were more adept at identifying radiographic findings than positional errors. Students should be reminded to examine the entire radiograph thoroughly to ensure extragnathic findings are not missed and to recognize and report patient positional errors.
Generalized Background Error covariance matrix model (GEN_BE v2.0)
NASA Astrophysics Data System (ADS)
Descombes, G.; Auligné, T.; Vandenberghe, F.; Barker, D. M.
2014-07-01
The specification of state background error statistics is a key component of data assimilation since it affects the impact observations will have on the analysis. In the variational data assimilation approach, applied in the geophysical sciences, the dimensions of the background error covariance matrix (B) are usually too large for it to be explicitly determined, so B needs to be modeled. Recent efforts to include new variables in the analysis, such as cloud parameters and chemical species, have required the development of the code to GENerate the Background Errors (GEN_BE) version 2.0 for the Weather Research and Forecasting (WRF) community model to allow for a simpler, flexible, robust, and community-oriented framework that gathers methods used by meteorological operational centers and researchers. We present the advantages of this new design for the data assimilation community by performing benchmarks and showing some of the new features on data assimilation test cases. As data assimilation for clouds remains a challenge, we present a multivariate approach that includes hydrometeors in the control variables and new correlated errors. In addition, the GEN_BE v2.0 code is employed to diagnose error parameter statistics for chemical species, which shows that it is a tool flexible enough to accommodate new control variables. While the background error statistics generation code was first developed for atmospheric research, the new version (GEN_BE v2.0) can easily be extended to other domains of science and can serve as a testbed for diagnostics and new modeling of B. Initially developed for variational data assimilation, the model of the B matrix may be useful for variational-ensemble hybrid methods as well.
Souvestre, P A; Landrock, C K; Blaber, A P
2008-08-01
Human-factors-centered aviation accident analyses report that skill-based errors cause 80% of all accidents, decision-making-related errors 30%, and perceptual errors 6%. In-flight decision-making error has long been recognized as a major avenue leading to incidents and accidents. Over the past three decades, tremendous and costly efforts have attempted to clarify causation, roles, and responsibility, and to elaborate various preventative and curative countermeasures blending state-of-the-art biomedical and technological advances with psychophysiological training strategies. In-flight statistics have not changed significantly, and a significant number of issues remain unresolved. The Fine Postural System and its corollary, Postural Deficiency Syndrome (PDS), both defined in the 1980s, are respectively neurophysiological and medical diagnostic models that reflect the regulatory status of central neural sensory-motor and cognitive controls. They have been used successfully in complex neurotraumatology and related rehabilitation for over two decades. Analysis of clinical data taken over a ten-year period from acute and chronic post-traumatic PDS patients shows a strong correlation between symptoms commonly exhibited before, alongside, or even after error, and sensory-motor or PDS-related symptoms. Examples are given of how PDS-related central sensory-motor control dysfunction can be correctly identified and monitored via a neurophysiological ocular-vestibular-postural monitoring system. The data presented provide strong evidence that a specific biomedical assessment methodology can lead to a better understanding of the in-flight adaptive neurophysiological, cognitive, and perceptual dysfunctional status that could induce in-flight errors. How relevant human factors can be identified and leveraged to maintain optimal performance is also addressed.
Medication errors in the emergency department: a systems approach to minimizing risk.
Peth, Howard A
2003-02-01
Adverse drug events caused by medication errors represent a common cause of patient injury in the practice of medicine. Many medication errors are preventable and hence particularly tragic when they occur, often with serious consequences. The enormous increase in the number of available drugs on the market makes it all but impossible for physicians, nurses, and pharmacists to possess the knowledge base necessary for fail-safe medication practice. Indeed, the greatest single systemic factor associated with medication errors is a deficiency in the knowledge requisite to the safe use of drugs. It is vital that physicians, nurses, and pharmacists have at their immediate disposal up-to-date drug references. Patients presenting for care in EDs are usually unfamiliar to their EPs and nurses, and the unique patient factors affecting medication response and toxicity are obscured. An appropriate history, physical examination, and diagnostic workup will assist EPs, nurses, and pharmacists in selecting the safest and most optimum therapeutic regimen for each patient. EDs deliver care "24/7" and are open when valuable information resources, such as hospital pharmacists and previously treating physicians, may not be available for consultation. A systems approach to the complex problem of medication errors will help emergency clinicians eliminate preventable adverse drug events and achieve a goal of a zero-defects system, in which medication errors are a thing of the past. New developments in information technology and the advent of electronic medical records with computerized physician order entry, ward-based clinical pharmacists, and standardized bar codes promise substantial reductions in the incidence of medication errors and adverse drug events. ED patients expect and deserve nothing less than the safest possible emergency medicine service.
Diagnostic value of 3D time-of-flight MRA in trigeminal neuralgia.
Cai, Jing; Xin, Zhen-Xue; Zhang, Yu-Qiang; Sun, Jie; Lu, Ji-Liang; Xie, Feng
2015-08-01
The aim of this meta-analysis was to evaluate the diagnostic value of 3D time-of-flight magnetic resonance angiography (3D-TOF-MRA) in trigeminal neuralgia (TN). Relevant studies were identified by computerized database searches supplemented by manual search strategies. The studies were included in accordance with stringent inclusion and exclusion criteria. Following a multistep screening process, high quality studies related to the diagnostic value of 3D-TOF-MRA in TN were selected for meta-analysis. Statistical analyses were conducted using Statistical Analysis Software (version 8.2; SAS Institute, Cary, NC, USA) and Meta Disc (version 1.4; Unit of Clinical Biostatistics, Ramon y Cajal Hospital, Madrid, Spain). For the present meta-analysis, we initially retrieved 95 studies from database searches. A total of 13 studies were eventually enrolled containing a combined total of 1084 TN patients. The meta-analysis results demonstrated that the sensitivity and specificity of the diagnostic value of 3D-TOF-MRA in TN were 95% (95% confidence interval [CI] 0.93-0.96) and 77% (95% CI 0.66-0.86), respectively. The pooled positive likelihood ratio and negative likelihood ratio were 2.72 (95% CI 1.81-4.09) and 0.08 (95% CI 0.06-0.12), respectively. The pooled diagnostic odds ratio of 3D-TOF-MRA in TN was 52.92 (95% CI 26.39-106.11), and the corresponding area under the curve in the summary receiver operating characteristic curve based on the 3D-TOF-MRA diagnostic image of observers was 0.9695 (standard error 0.0165). Our results suggest that 3D-TOF-MRA has excellent sensitivity and specificity as a diagnostic tool for TN, and that it can accurately identify neurovascular compression in TN patients. Copyright © 2015 Elsevier Ltd. All rights reserved.
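The reported metrics are related by standard formulas: LR+ = sensitivity / (1 - specificity), LR- = (1 - sensitivity) / specificity, and the diagnostic odds ratio is their quotient. Note that plugging the pooled sensitivity and specificity into these formulas gives similar but not identical numbers to the abstract's pooled LRs and DOR, because meta-analytic pooling is done per study:

```python
# Likelihood ratios and diagnostic odds ratio from sensitivity/specificity.
# Values below are the pooled point estimates from the abstract; per-study
# pooling explains why the abstract's pooled LR+/LR-/DOR differ slightly.

def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)   # positive likelihood ratio
    lr_neg = (1 - sens) / spec   # negative likelihood ratio
    dor = lr_pos / lr_neg        # diagnostic odds ratio
    return lr_pos, lr_neg, dor

lr_pos, lr_neg, dor = likelihood_ratios(sens=0.95, spec=0.77)
print(round(lr_neg, 2))  # 0.06
```

The very small LR- (close to the pooled 0.08 in the abstract) is what makes a negative 3D-TOF-MRA result useful for ruling out neurovascular compression.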
Characterization of identification errors and uses in localization of poor modal correlation
NASA Astrophysics Data System (ADS)
Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry
2017-05-01
While modal identification is a mature subject, very few studies address the characterization of errors associated with components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion (MAC) is used to pair modes and to localize at which sensors discrepancies occur. Poor correlation is usually attributed to modeling errors, but identification errors clearly occur as well. In particular, with 3D Scanning Laser Doppler Vibrometer measurement, many transfer functions are measured. As a result, individual validation of each measurement cannot be performed manually in a reasonable time frame, and a notable fraction of measurements is expected to be fairly noisy, leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, diagnosis of the origin of the error is needed. The level evaluates the transfer function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal indicator, the product of error and level, is then shown to be relevant for detecting poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error, it avoids keeping erroneous modal components. Applied after removal of poor modal components, it provides spatial maps of poor correlation, which help localize mode shape correlation errors and thus prepare the selection of model changes in updating procedures.
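As a rough illustration of the MACCo idea in the abstract above, the sketch below (function and variable names are ours, not the paper's) greedily removes the sensor whose deletion most improves the MAC between a test and a model mode shape, thereby flagging suspect modal components:

```python
import numpy as np

def mac(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape vectors."""
    num = abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

def macco(phi_test, phi_fem, n_remove):
    """Greedy MACCo-style ranking: repeatedly drop the sensor whose
    removal most improves the MAC, returning the removal order as
    (sensor index, MAC after removal) pairs."""
    keep = list(range(len(phi_test)))
    order = []
    for _ in range(n_remove):
        best_s, best_m = None, -1.0
        for s in keep:
            idx = [i for i in keep if i != s]
            m = mac(phi_test[idx], phi_fem[idx])
            if m > best_m:
                best_s, best_m = s, m
        keep.remove(best_s)
        order.append((best_s, best_m))
    return order
```

Sensors whose removal produces the largest MAC jump are the ones most likely carrying erroneous modal components.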
Australian children with cleft palate achieve age-appropriate speech by 5 years of age.
Chacon, Antonia; Parkin, Melissa; Broome, Kate; Purcell, Alison
2017-12-01
Children with cleft palate demonstrate atypical speech sound development, which can influence their intelligibility, literacy and learning. There is limited documentation regarding how speech sound errors change over time in cleft palate speech and the effect that these errors have upon mono- versus polysyllabic word production. The objective of this study was to examine the phonetic and phonological speech skills of children with cleft palate at ages 3 and 5. A cross-sectional observational design was used. Eligible participants were aged 3 or 5 years with a repaired cleft palate. The Diagnostic Evaluation of Articulation and Phonology (DEAP) Articulation subtest and a non-standardised list of mono- and polysyllabic words were administered once for each child. The Profile of Phonology (PROPH) was used to analyse each child's speech. N = 51 children with cleft palate participated in the study. Three-year-old children with cleft palate produced significantly more speech errors than their typically-developing peers, but no difference was apparent at 5 years. The 5-year-olds demonstrated greater phonetic and phonological accuracy than the 3-year-old children. Polysyllabic words were more affected by errors than monosyllables in the 3-year-old group only. Children with cleft palate are prone to phonetic and phonological speech errors in their preschool years. Most of these speech errors approximate those of typically-developing children by 5 years. At 3 years, word shape has an influence upon phonological speech accuracy. Speech pathology intervention is indicated to support the intelligibility of these children from their earliest stages of development. Copyright © 2017 Elsevier B.V. All rights reserved.
Identification of facilitators and barriers to residents' use of a clinical reasoning tool.
DiNardo, Deborah; Tilstra, Sarah; McNeil, Melissa; Follansbee, William; Zimmer, Shanta; Farris, Coreen; Barnato, Amber E
2018-03-28
While there is some experimental evidence to support the use of cognitive forcing strategies to reduce diagnostic error in residents, the potential usability of such strategies in the clinical setting has not been explored. We sought to test the effect of a clinical reasoning tool on diagnostic accuracy and to obtain feedback on its usability and acceptability. We conducted a randomized behavioral experiment testing the effect of this tool on diagnostic accuracy on written cases among post-graduate 3 (PGY-3) residents at a single internal medical residency program in 2014. Residents completed written clinical cases in a proctored setting with and without prompts to use the tool. The tool encouraged reflection on concordant and discordant aspects of each case. We used random effects regression to assess the effect of the tool on diagnostic accuracy of the independent case sets, controlling for case complexity. We then conducted audiotaped structured focus group debriefing sessions and reviewed the tapes for facilitators and barriers to use of the tool. Of 51 eligible PGY-3 residents, 34 (67%) participated in the study. The average diagnostic accuracy increased from 52% to 60% with the tool, a difference that just met the test for statistical significance in adjusted analyses (p=0.05). Residents reported that the tool was generally acceptable and understandable but did not recognize its utility for use with simple cases, suggesting the presence of overconfidence bias. A clinical reasoning tool improved residents' diagnostic accuracy on written cases. Overconfidence bias is a potential barrier to its use in the clinical setting.
Accelerated Compressed Sensing Based CT Image Reconstruction
Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R.; Paul, Narinder S.; Cobbold, Richard S. C.
2015-01-01
In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum-a-posterior approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization. PMID:26167200
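The weighted compressed-sensing formulation mentioned above can be sketched with plain weighted ISTA on a toy linear system; the paper's pseudopolar Fourier Radon transform and noise-derived weights are not reproduced here, and all names are illustrative:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def weighted_ista(A, b, weights, lam=0.1, n_iter=200):
    """Weighted l1-regularized least squares via ISTA:
    min_x 0.5*||Ax - b||^2 + lam * sum_i weights_i * |x_i|."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft(x - grad / L, lam * weights / L)
    return x
```

In the paper, the weights are derived from the statistics of measurement noise and rebinning interpolation error; here they are simply supplied by the caller.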
Measurement of tokamak error fields using plasma response and its applicability to ITER
Strait, Edward J.; Buttery, Richard J.; Casper, T. A.; ...
2014-04-17
The nonlinear response of a low-beta tokamak plasma to non-axisymmetric fields offers an alternative to direct measurement of the non-axisymmetric part of the vacuum magnetic fields, often termed “error fields”. Possible approaches are discussed for determination of error fields and the required current in non-axisymmetric correction coils, with an emphasis on two relatively new methods: measurement of the torque balance on a saturated magnetic island, and measurement of the braking of plasma rotation in the absence of an island. The former is well suited to ohmically heated discharges, while the latter is more appropriate for discharges with a modest amount of neutral beam heating to drive rotation. Both can potentially provide continuous measurements during a discharge, subject to the limitation of a minimum averaging time. The applicability of these methods to ITER is discussed, and an estimate is made of their uncertainties in light of the specifications of ITER’s diagnostic systems. Furthermore, the use of plasma response-based techniques in normal ITER operational scenarios may allow identification of the error field contributions by individual central solenoid coils, but identification of the individual contributions by the outer poloidal field coils or other sources is less likely to be feasible.
NASA Astrophysics Data System (ADS)
Hermita, N.; Suhandi, A.; Syaodih, E.; Samsudin, A.; Isjoni; Johan, H.; Rosa, F.; Setyaningsih, R.; Sapriadil; Safitri, D.
2017-09-01
We have already constructed and implemented a diagnostic test in the four-tier format to diagnose pre-service elementary teachers’ misconceptions about static electricity. The method utilized in this study is 3D-1I (Define, Design, Develop and Implementation), applied to pre-service elementary school teachers. The number of respondents involved in the study is 78 students of PGSD FKIP Universitas Riau. The data were collected by administering diagnostic test items in the form of a four-tier test. The results indicate several misconceptions related to the static electricity concept, including: 1) electrostatic objects cannot attract neutral objects, 2) a neutral object is an object that does not contain an electrical charge, and 3) the magnitude of the tensile force between two charged objects depends on the size of the charge. Moreover, the results establish that the diagnostic test is able to quantify misconceptions and classify pre-service elementary school teachers' levels of understanding, namely scientific knowledge, misconception, lack of knowledge, and error. In conclusion, a diagnostic test item in the form of a four-tier test has been constructed and implemented to diagnose students’ conceptions of static electricity.
Colonoscopy video quality assessment using hidden Markov random fields
NASA Astrophysics Data System (ADS)
Park, Sun Young; Sargent, Dusty; Spofford, Inbar; Vosburgh, Kirby
2011-03-01
With colonoscopy becoming a common procedure for individuals aged 50 or more who are at risk of developing colorectal cancer (CRC), colon video data is being accumulated at an ever increasing rate. However, the clinically valuable information contained in these videos is not being maximally exploited to improve patient care and accelerate the development of new screening methods. One of the well-known difficulties in colonoscopy video analysis is the abundance of frames with no diagnostic information. Approximately 40% - 50% of the frames in a colonoscopy video are contaminated by noise, acquisition errors, glare, blur, and uneven illumination. Therefore, filtering out low quality frames containing no diagnostic information can significantly improve the efficiency of colonoscopy video analysis. To address this challenge, we present a quality assessment algorithm to detect and remove low quality, uninformative frames. The goal of our algorithm is to discard low quality frames while retaining all diagnostically relevant information. Our algorithm is based on a hidden Markov model (HMM) in combination with two measures of data quality to filter out uninformative frames. Furthermore, we present a two-level framework based on an embedded hidden Markov model (EHMM) to incorporate the proposed quality assessment algorithm into a complete, automated diagnostic image analysis system for colonoscopy video.
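The HMM-based frame-filtering step can be illustrated with a minimal two-state Viterbi smoother. Treating a per-frame quality score in (0, 1) as the emission probability of the "informative" state is our simplification, not the paper's exact model:

```python
import numpy as np

def viterbi_quality(scores, p_stay=0.9):
    """Two-state HMM smoothing of per-frame quality scores:
    state 1 = informative, state 0 = uninformative. Each score is read
    as the emission probability of the informative state (a sketch)."""
    scores = np.clip(np.asarray(scores, float), 1e-6, 1 - 1e-6)
    logA = np.log([[p_stay, 1 - p_stay],      # row = from-state, col = to-state
                   [1 - p_stay, p_stay]])
    loge = np.log(np.vstack([1 - scores, scores]))   # (2, T) log emissions
    T = len(scores)
    delta = np.full((2, T), -np.inf)
    psi = np.zeros((2, T), int)
    delta[:, 0] = np.log(0.5) + loge[:, 0]           # uniform prior
    for t in range(1, T):
        for s in (0, 1):
            cand = delta[:, t - 1] + logA[:, s]
            psi[s, t] = cand.argmax()
            delta[s, t] = cand.max() + loge[s, t]
    path = np.zeros(T, int)
    path[-1] = delta[:, -1].argmax()
    for t in range(T - 2, -1, -1):                   # backtrack
        path[t] = psi[path[t + 1], t + 1]
    return path
```

The transition prior smooths over isolated bad frames while still switching state for sustained runs of low-quality frames, which is exactly the behavior wanted when discarding uninformative segments.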
Borgia, G C; Brown, R J; Fantazzini, P
2000-12-01
The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65-77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T(1) data, and all had fixed data spacings, uniform in log-time. However, for T(2) data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T(2) data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times. Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise. Copyright 2000 Academic Press.
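The inversion problem UPEN addresses can be sketched as a non-negative, smoothness-regularized fit of a decay curve to a grid of relaxation times. The sketch below uses a single fixed penalty weight; true UPEN adapts the penalty locally via negative feedback, which is not reproduced here:

```python
import numpy as np

def invert_decay(t, y, T2_grid, lam=1e-3, n_iter=20000):
    """Fit y(t) as a non-negative sum of exponentials exp(-t/T2) on a
    fixed T2 grid, with a fixed second-difference smoothness penalty
    (a simplification of UPEN's locally adaptive penalty)."""
    K = np.exp(-np.outer(t, 1.0 / T2_grid))        # decay kernel matrix
    D = np.diff(np.eye(len(T2_grid)), 2, axis=0)   # 2nd-difference operator
    H = K.T @ K + lam * D.T @ D
    g = K.T @ y
    step = 1.0 / np.linalg.norm(H, 2)
    x = np.zeros(len(T2_grid))
    for _ in range(n_iter):
        # projected gradient step enforcing the nonnegative constraint
        x = np.maximum(x - step * (H @ x - g), 0.0)
    return x
```

For a noiseless single-exponential input, the recovered distribution should peak near the true relaxation time, broadened by the smoothing penalty.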
Yang, Chao-Bo; He, Ping; Escofet-Martin, David; Peng, Jiang-Bo; Fan, Rong-Wei; Yu, Xin; Dunn-Rankin, Derek
2018-01-10
In this paper, three ultrashort-pulse coherent anti-Stokes Raman scattering (CARS) thermometry approaches are summarized with a theoretical time-domain model. The difference between the approaches can be attributed to variations in the input field characteristics of the time-domain model. That is, all three approaches of ultrashort-pulse CARS thermometry can be simulated with the unified model by changing only the input field features. As a specific example, hybrid femtosecond/picosecond CARS is assessed for its use in combustion flow diagnostics; thus, the examination of input-field impacts on thermometry focuses on vibrational hybrid femtosecond/picosecond CARS. Beginning with the general model of ultrashort-pulse CARS, the spectra with different input field parameters are simulated. To analyze the temperature measurement error introduced by the input field impacts, the spectra are fitted and compared to fits with a model neglecting the influence introduced by the input fields. The results demonstrate that, however the input pulses are characterized, temperature errors would still be introduced during an experiment. With proper field characterization, however, the significance of the error can be reduced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A. L.; Feldman, D. R.; Freidenreich, S.; ...
2017-12-07
A new paradigm in benchmark absorption-scattering radiative transfer is presented that enables both the globally averaged and spatially resolved testing of climate model radiation parameterizations in order to uncover persistent sources of biases in the aerosol instantaneous radiative effect (IRE). A proof of concept is demonstrated with the Geophysical Fluid Dynamics Laboratory AM4 and Community Earth System Model 1.2.2 climate models. Instead of prescribing atmospheric conditions and aerosols, as in prior intercomparisons, native snapshots of the atmospheric state and aerosol optical properties from the participating models are used as inputs to an accurate radiation solver to uncover model-relevant biases. These diagnostic results show that the models' aerosol IRE bias is of the same magnitude as the persistent range cited (~1 W/m²) and also varies spatially and with intrinsic aerosol optical properties. The findings presented here underscore the significance of native model error analysis and its dispositive ability to diagnose global biases, confirming its fundamental value for the Radiative Forcing Model Intercomparison Project.
Coppens-Hofman, Marjolein C.; Terband, Hayo; Snik, Ad F.M.; Maassen, Ben A.M.
2017-01-01
Purpose Adults with intellectual disabilities (ID) often show reduced speech intelligibility, which affects their social interaction skills. This study aims to establish the main predictors of this reduced intelligibility in order to ultimately optimise management. Method Spontaneous speech and picture naming tasks were recorded in 36 adults with mild or moderate ID. Twenty-five naïve listeners rated the intelligibility of the spontaneous speech samples. Performance on the picture-naming task was analysed by means of a phonological error analysis based on expert transcriptions. Results The transcription analyses showed that the phonemic and syllabic inventories of the speakers were complete. However, multiple errors at the phonemic and syllabic level were found. The frequencies of specific types of errors were related to intelligibility and quality ratings. Conclusions The development of the phonemic and syllabic repertoire appears to be completed in adults with mild-to-moderate ID. The charted speech difficulties can be interpreted to indicate speech motor control and planning difficulties. These findings may aid the development of diagnostic tests and speech therapies aimed at improving speech intelligibility in this specific group. PMID:28118637
Pierrillas, Philippe B; Tod, Michel; Amiel, Magali; Chenel, Marylore; Henin, Emilie
2016-09-01
The purpose of this study was to explore the impact of censoring due to animal sacrifice on parameter estimates and tumor volume calculated from two diameters in larger tumors during tumor growth experiments in preclinical studies. The type of measurement error that can be expected was also investigated. Different scenarios were challenged using the stochastic simulation and estimation process. One thousand datasets were simulated under the design of a typical tumor growth study in xenografted mice, and then, eight approaches were used for parameter estimation with the simulated datasets. The distribution of estimates and simulation-based diagnostics were computed for comparison. The different approaches were robust regarding the choice of residual error and gave equivalent results. However, by not considering missing data induced by sacrificing the animal, parameter estimates were biased and led to false inferences in terms of compound potency; the threshold concentration for tumor eradication when ignoring censoring was 581 ng/mL, but the true value was 240 ng/mL.
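For reference, a common way to compute tumor volume from two calliper diameters in xenograft studies is the modified ellipsoid formula V = L·W²/2; whether this exact formula was used in the study above is an assumption on our part:

```python
def tumor_volume(length_mm, width_mm):
    """Two-diameter ellipsoid approximation widely used in xenograft
    studies: V = L * W^2 / 2, with W the smaller of the two diameters.
    (An illustrative convention; the paper does not state its formula.)"""
    L, W = max(length_mm, width_mm), min(length_mm, width_mm)
    return L * W * W / 2.0
```

Taking W as the smaller diameter makes the function symmetric in its arguments, so measurement order does not matter.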
On the use of inexact, pruned hardware in atmospheric modelling
Düben, Peter D.; Joven, Jaume; Lingamneni, Avinash; McNamara, Hugh; De Micheli, Giovanni; Palem, Krishna V.; Palmer, T. N.
2014-01-01
Inexact hardware design, which advocates trading the accuracy of computations in exchange for significant savings in area, power and/or performance of computing hardware, has received increasing prominence in several error-tolerant application domains, particularly those involving perceptual or statistical end-users. In this paper, we evaluate inexact hardware for its applicability in weather and climate modelling. We expand previous studies on inexact techniques, in particular probabilistic pruning, to floating point arithmetic units and derive several simulated set-ups of pruned hardware with reasonable levels of error for applications in atmospheric modelling. The set-up is tested on the Lorenz ‘96 model, a toy model for atmospheric dynamics, using software emulation for the proposed hardware. The results show that large parts of the computation tolerate the use of pruned hardware blocks without major changes in the quality of short- and long-time diagnostics, such as forecast errors and probability density functions. This could open the door to significant savings in computational cost and to higher resolution simulations with weather and climate models. PMID:24842031
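The software-emulation approach described above can be sketched by truncating floating-point mantissas inside a Lorenz '96 time step. Mantissa truncation is a crude stand-in for the paper's probabilistic pruning, and the function names are ours:

```python
import numpy as np

def truncate_mantissa(x, bits):
    """Software emulation of inexact hardware: keep only `bits`
    mantissa bits of a float (a crude stand-in for pruned
    floating-point units)."""
    m, e = np.frexp(x)
    scale = 2.0 ** bits
    return np.ldexp(np.round(m * scale) / scale, e)

def lorenz96_step(x, F=8.0, dt=0.01, bits=None):
    """One Euler step of the Lorenz '96 model,
    dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    optionally computed at reduced precision."""
    d = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
    if bits is not None:
        d = truncate_mantissa(d, bits)
    return x + dt * d
```

Running the model at, say, 20 mantissa bits instead of the 52 of double precision changes individual steps only marginally, which is the kind of tolerance the paper exploits for long-time statistical diagnostics.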
Sepsis in Poland: Why Do We Die?
Rorat, Marta; Jurek, Tomasz
2015-01-01
Objective To investigate the adverse events and potential risk factors in patients who develop sepsis. Subjects and Methods Fifty-five medico-legal opinion forms relating to sepsis cases issued by the Department of Forensic Medicine, Wroclaw, Poland, between 2004 and 2013 were analyzed for medical errors and risk factors for adverse events. Results The most common causes of medical errors were a lack of knowledge in recognition, diagnosis and therapy as well as ignorance of risk. The common risk factors for adverse events were deferral of a diagnostic or therapeutic decision, high-level anxiety of patients or their families about the patient's health and actively seeking help. The most significant risk factors were communication errors, insufficient medical staff, stereotype-based thinking about diseases and providing easy explanations for serious symptoms. Conclusion The most common cause of adverse events related to sepsis in the Polish health-care system was a lack of knowledge about the symptoms, diagnosis and treatment as well as the ignoring of danger. A possible means of improving safety might be through spreading knowledge and creating medical management algorithms for all health-care workers, especially physicians. PMID:25501966
Pattern classifier for health monitoring of helicopter gearboxes
NASA Technical Reports Server (NTRS)
Chin, Hsinyung; Danai, Kourosh; Lewicki, David G.
1993-01-01
The application of a newly developed diagnostic method to a helicopter gearbox is demonstrated. This method is a pattern classifier which uses a multi-valued influence matrix (MVIM) as its diagnostic model. The method benefits from a fast learning algorithm, based on error feedback, that enables it to estimate gearbox health from a small set of measurement-fault data. The MVIM method can also assess the diagnosability of the system and variability of the fault signatures as the basis to improve fault signatures. This method was tested on vibration signals reflecting various faults in an OH-58A main rotor transmission gearbox. The vibration signals were then digitized and processed by a vibration signal analyzer to enhance and extract various features of the vibration data. The parameters obtained from this analyzer were utilized to train and test the performance of the MVIM method in both detection and diagnosis. The results indicate that the MVIM method provided excellent detection results when the full range of fault effects on the measurements was included in training, and it had a correct diagnostic rate of 95 percent when the faults were included in training.
Automation, consolidation, and integration in autoimmune diagnostics.
Tozzoli, Renato; D'Aurizio, Federica; Villalta, Danilo; Bizzaro, Nicola
2015-08-01
Over the past two decades, we have witnessed an extraordinary change in autoimmune diagnostics, characterized by the progressive evolution of analytical technologies, the availability of new tests, and the explosive growth of molecular biology and proteomics. Aside from these huge improvements, organizational changes have also occurred which brought about a more modern vision of the autoimmune laboratory. The introduction of automation (for harmonization of testing, reduction of human error, reduction of handling steps, increase of productivity, decrease of turnaround time, improvement of safety), consolidation (combining different analytical technologies or strategies on one instrument or on one group of connected instruments) and integration (linking analytical instruments or group of instruments with pre- and post-analytical devices) opened a new era in immunodiagnostics. In this article, we review the most important changes that have occurred in autoimmune diagnostics and present some models related to the introduction of automation in the autoimmunology laboratory, such as automated indirect immunofluorescence and changes in the two-step strategy for detection of autoantibodies; automated monoplex immunoassays and reduction of turnaround time; and automated multiplex immunoassays for autoantibody profiling.
An Audio Jack-Based Electrochemical Impedance Spectroscopy Sensor for Point-of-Care Diagnostics.
Jiang, Haowei; Sun, Alex; Venkatesh, A G; Hall, Drew A
2017-02-01
Portable and easy-to-use point-of-care (POC) diagnostic devices hold high promise for dramatically improving public health and wellness. In this paper, we present a mobile health (mHealth) immunoassay platform based on audio jack embedded devices, such as smartphones and laptops, that uses electrochemical impedance spectroscopy (EIS) to detect binding of target biomolecules. Compared to other biomolecular detection tools, this platform is intended to be used as a plug-and-play peripheral that reuses existing hardware in the mobile device and does not require an external battery, thereby improving upon its convenience and portability. Experimental data using a passive circuit network to mimic an electrochemical cell demonstrate that the device performs comparably to laboratory grade instrumentation with 0.3% and 0.5° magnitude and phase error, respectively, over a 17 Hz to 17 kHz frequency range. The measured power consumption is 2.5 mW with a dynamic range of 60 dB. This platform was verified by monitoring the real-time formation of a NeutrAvidin self-assembled monolayer (SAM) on a gold electrode demonstrating the potential for POC diagnostics.
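The passive network used above to mimic an electrochemical cell can be modeled in a few lines. A parallel RC network is a plausible minimal choice; the component values and function names below are illustrative, not taken from the paper:

```python
import cmath
import math

def rc_parallel_impedance(r_ohm, c_farad, freq_hz):
    """Complex impedance of a resistor in parallel with a capacitor,
    returned as (magnitude in ohms, phase in degrees). This is the kind
    of passive network an EIS instrument would be validated against."""
    zc = 1.0 / (1j * 2 * math.pi * freq_hz * c_farad)   # capacitor impedance
    z = (r_ohm * zc) / (r_ohm + zc)                      # parallel combination
    return abs(z), math.degrees(cmath.phase(z))
```

At the corner frequency f = 1/(2πRC) the magnitude drops to R/√2 and the phase reaches -45°, a standard sanity check for any EIS readout chain.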
Comprehensive evaluation of the child with intellectual disability or global developmental delays.
Moeschler, John B; Shevell, Michael
2014-09-01
Global developmental delay and intellectual disability are relatively common pediatric conditions. This report describes the recommended clinical genetics diagnostic approach. The report is based on a review of published reports, most consisting of medium to large case series of diagnostic tests used, and the proportion of those that led to a diagnosis in such patients. Chromosome microarray is designated as a first-line test and replaces the standard karyotype and fluorescent in situ hybridization subtelomere tests for the child with intellectual disability of unknown etiology. Fragile X testing remains an important first-line test. The importance of considering testing for inborn errors of metabolism in this population is supported by a recent systematic review of the literature and several case series recently published. The role of brain MRI remains important in certain patients. There is also a discussion of the emerging literature on the use of whole-exome sequencing as a diagnostic test in this population. Finally, the importance of intentional comanagement among families, the medical home, and the clinical genetics specialty clinic is discussed. Copyright © 2014 by the American Academy of Pediatrics.
A two-point diagnostic for the H II galaxy Hubble diagram
NASA Astrophysics Data System (ADS)
Leaf, Kyle; Melia, Fulvio
2018-03-01
A previous analysis of starburst-dominated H II galaxies and H II regions has demonstrated a statistically significant preference for the Friedmann-Robertson-Walker cosmology with zero active mass, known as the Rh = ct universe, over Λ cold dark matter (ΛCDM) and its related dark-matter parametrizations. In this paper, we employ a two-point diagnostic with these data to present a complementary statistical comparison of Rh = ct with Planck ΛCDM. Our two-point diagnostic compares, in a pairwise fashion, the difference between the distance modulus measured at two redshifts with that predicted by each cosmology. Our results support the conclusion drawn by a previous comparative analysis demonstrating that Rh = ct is statistically preferred over Planck ΛCDM. But we also find that the reported errors in the H II measurements may not be purely Gaussian, perhaps due to a partial contamination by non-Gaussian systematic effects. The use of H II galaxies and H II regions as standard candles may be improved even further with a better handling of the systematics in these sources.
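The two-point diagnostic lends itself to a compact sketch. In the Rh = ct cosmology the luminosity distance is d_L = (1+z)(c/H0)ln(1+z); the H0 value and function names below are illustrative choices, not the paper's:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def mu_rh_ct(z, h0=67.4):
    """Distance modulus in the Rh = ct cosmology (H0 in km/s/Mpc).
    mu = 5 log10(d_L / 10 pc), with d_L in Mpc."""
    d_l_mpc = (1 + z) * (C_KM_S / h0) * math.log(1 + z)
    return 5 * math.log10(d_l_mpc) + 25

def two_point(z1, mu1, z2, mu2, model=mu_rh_ct):
    """Pairwise diagnostic: measured minus predicted difference of
    distance moduli for a redshift pair; consistent with zero (within
    errors) if the model is correct."""
    return (mu1 - mu2) - (model(z1) - model(z2))
```

Because only the difference of moduli enters, any constant offset in the calibration of the standard candles cancels out of each pair, which is the main appeal of the diagnostic.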
Single-shot high-resolution characterization of optical pulses by spectral phase diversity
Dorrer, C.; Waxer, L. J.; Kalb, A.; ...
2015-12-15
The concept of spectral phase diversity is proposed and applied to the temporal characterization of optical pulses. The experimental trace is composed of the measured power of a plurality of ancillary optical pulses derived from the pulse under test by adding known amounts of chromatic dispersion. The spectral phase of the pulse under test is retrieved by minimizing the error between the experimental trace and a trace calculated from the optical spectrum using the known diagnostic parameters. An assembly composed of splitters and dispersive delay fibers has been used to generate 64 ancillary pulses whose instantaneous power can be detected in a single shot with a high-bandwidth photodiode and oscilloscope. Pulse-shape reconstruction for pulses shorter than the photodetection impulse response has been demonstrated. The diagnostic is experimentally shown to accurately characterize pulses from a chirped-pulse–amplification system when its stretcher is detuned from the position for optimal recompression. As a result, various investigations of the performance with respect to the number of ancillary pulses and the range of chromatic dispersion generated in the diagnostic are presented.
NASA Astrophysics Data System (ADS)
Unger, Jakob; Sun, Tianchen; Chen, Yi-Ling; Phipps, Jennifer E.; Bold, Richard J.; Darrow, Morgan A.; Ma, Kwan-Liu; Marcu, Laura
2018-01-01
An important step in establishing the diagnostic potential for emerging optical imaging techniques is accurate registration between imaging data and the corresponding tissue histopathology typically used as gold standard in clinical diagnostics. We present a method to precisely register data acquired with a point-scanning spectroscopic imaging technique from fresh surgical tissue specimen blocks with corresponding histological sections. Using a visible aiming beam to augment point-scanning multispectral time-resolved fluorescence spectroscopy on video images, we evaluate two different markers for the registration with histology: fiducial markers using a 405-nm CW laser and the tissue block's outer shape characteristics. We compare the registration performance with benchmark methods using either the fiducial markers or the outer shape characteristics alone to a hybrid method using both feature types. The hybrid method was found to perform best, reaching an average error of 0.78±0.67 mm. This method provides a robust framework for validating the diagnostic abilities of optical fiber-based techniques and furthermore enables the application of supervised machine learning techniques to automate tissue characterization.
Embedded importance watermarking for image verification in radiology
NASA Astrophysics Data System (ADS)
Osborne, Dominic; Rogers, D.; Sorell, M.; Abbott, Derek
2004-03-01
Digital medical images used in radiology are quite different from everyday continuous-tone images. Radiology images require that all detailed diagnostic information can be extracted, which traditionally constrains digital medical images to be large and stored without loss of information. In order to transmit diagnostic images over a narrowband wireless communication link for remote diagnosis, lossy compression schemes must be used. This involves discarding detailed information and compressing the data, making it more susceptible to error. The loss of image detail and the incidental degradation occurring during transmission raise potential legal accountability issues, especially in the case of the null diagnosis of a tumor. The work proposed here investigates techniques for verifying the veracity of medical images - in particular, detailing the use of embedded watermarking as an objective means to ensure that important parts of the medical image can be verified. We present results showing how embedded watermarking can be used to differentiate contextual from detailed information. The types of images used include spiral hairline fractures and small tumors, which contain the essential diagnostic high-spatial-frequency information.
Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines
NASA Astrophysics Data System (ADS)
Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin
2018-03-01
In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances, on the basis of an adaptive diagnostic observer. Taking both sensor faults and actuator faults into account, a general model of aircraft engine control systems subject to uncertainties and disturbances is considered. The corresponding augmented dynamic model is then established to facilitate the fault diagnosis and fault tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. By creating an adaptive diagnostic observer and applying a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed, and the closed-loop system is shown to be robustly stabilized. It is also proven that the adaptive diagnostic observer output errors and the fault estimates converge exponentially to a set, with a convergence rate greater than a value that can be adjusted by choosing the design parameters properly. Simulation on a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.
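The core sliding-mode idea, which underlies such fault-tolerant designs, can be shown on a minimal scalar example. This sketch is not the paper's observer-based controller: it is the textbook discontinuous law u = -k·sign(s) on a first-order plant with a bounded disturbance, where choosing the gain k above the disturbance bound forces the state onto the sliding surface s = x = 0.

```python
import numpy as np

# Minimal sliding-mode sketch (illustrative, not the paper's design):
# scalar plant x' = u + d(t) with |d| <= 0.5; sliding surface s = x;
# control u = -k*sign(s) with k > sup|d| guarantees reaching and sliding.
k, dt = 1.0, 1e-3
x = 1.0                                        # initial state
for n in range(5000):                          # simulate 5 s, Euler steps
    t = n * dt
    d = 0.5 * np.sin(2 * np.pi * t)            # bounded disturbance
    u = -k * np.sign(x)                        # discontinuous control law
    x += (u + d) * dt
# After the reaching phase, x chatters in a band of width ~(k+0.5)*dt
# about the surface x = 0, despite the unknown disturbance.
```

The chattering band shrinks with the step size; practical designs (including the paper's) replace the raw sign function or add observers precisely to manage this trade-off.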
Single-shot high-resolution characterization of optical pulses by spectral phase diversity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dorrer, C.; Waxer, L. J.; Kalb, A.
The concept of spectral phase diversity is proposed and applied to the temporal characterization of optical pulses. The experimental trace is composed of the measured power of a plurality of ancillary optical pulses derived from the pulse under test by adding known amounts of chromatic dispersion. The spectral phase of the pulse under test is retrieved by minimizing the error between the experimental trace and a trace calculated from the optical spectrum using the known diagnostic parameters. An assembly composed of splitters and dispersive delay fibers has been used to generate 64 ancillary pulses whose instantaneous power can be detected in a single shot with a high-bandwidth photodiode and oscilloscope. Pulse-shape reconstruction for pulses shorter than the photodetection impulse response has been demonstrated. The diagnostic is experimentally shown to accurately characterize pulses from a chirped-pulse-amplification system when its stretcher is detuned from the position for optimal recompression. Finally, investigations of the performance with respect to the number of ancillary pulses and the range of chromatic dispersion generated in the diagnostic are presented.
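The retrieval principle, minimizing the mismatch between a measured trace and one computed from the known spectrum and known added dispersions, can be sketched numerically. This toy version reduces the unknown spectral phase to a single quadratic coefficient (group-delay dispersion) and uses peak temporal power as the per-pulse observable; all grids, scales, and the trace definition are illustrative, not the authors' diagnostic.

```python
import numpy as np

# Toy spectral-phase-diversity retrieval (illustrative assumptions):
# a Gaussian spectral amplitude, ancillary pulses formed by known added
# GDD values, and the trace taken as each pulse's peak temporal power.
w = np.linspace(-5, 5, 256)                    # angular-frequency grid
A = np.exp(-w**2 / 2)                          # known spectral amplitude
gdds = np.linspace(-3, 3, 16)                  # known added dispersions

def trace(phi2):
    """Peak temporal power of each ancillary pulse for trial GDD phi2."""
    out = []
    for g in gdds:
        E = A * np.exp(1j * 0.5 * (phi2 + g) * w**2)
        out.append(np.abs(np.fft.ifft(E)).max() ** 2)
    return np.array(out)

phi2_true = 1.3
measured = trace(phi2_true)                    # stand-in for measured data

# Retrieve the unknown phase coefficient by minimizing the trace error
# over a search grid (a gradient-based optimizer would also work).
phi2_grid = np.linspace(-2, 2, 401)
errs = [np.sum((trace(p) - measured) ** 2) for p in phi2_grid]
phi2_est = phi2_grid[int(np.argmin(errs))]
```

The real diagnostic retrieves a full spectral phase (not one coefficient) from 64 single-shot power samples, but the fitting structure is the same: known dispersions in, trace error minimized out.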
Determination of eddy current response with magnetic measurements.
Jiang, Y Z; Tan, Y; Gao, Z; Nakamura, K; Liu, W B; Wang, S Z; Zhong, H; Wang, B B
2017-09-01
Accurate mutual inductances between the magnetic diagnostics and the poloidal field coils are an essential requirement for determining the poloidal flux for plasma equilibrium reconstruction. The mutual inductance calibration of the flux loops and magnetic probes requires time-varying coil currents, which simultaneously drive eddy currents in electrically conducting structures. The eddy-current-induced field appearing in the magnetic measurements can substantially increase the calibration error if the eddy currents are neglected in the model. In this paper, an expression for the magnetic diagnostic response to the coil currents is used to calibrate the mutual inductances, estimate the conductor time constant, and predict the eddy current response. It is found that the eddy current effects in the magnetic signals are well explained by the determined eddy current response. A set of experiments using a specially shaped saddle coil diagnostic is conducted to measure the SUNIST-like eddy current response and to examine the accuracy of this method. In shots that include plasmas, this approach can more accurately determine the plasma-related response in the magnetic signals by eliminating the field due to the eddy currents produced by the external field.
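The fitting step in such a calibration can be illustrated with a simplified model: the diagnostic signal is the direct mutual-inductance term M·I(t) plus an eddy-current term that low-pass filters dI/dt with a conductor time constant tau. This sketch, with hypothetical names and values, recovers M and the eddy amplitude by linear least squares once tau is assumed known.

```python
import numpy as np

# Simplified calibration model (illustrative values): a flux-loop signal
# psi(t) = M*I(t) + a*e(t), where e(t) is dI/dt filtered by a first-order
# lag with conductor time constant tau:  tau*e' + e = dI/dt.
dt, tau = 1e-4, 5e-3
t = np.arange(0, 0.1, dt)
I = np.sin(2 * np.pi * 20 * t)                 # time-varying coil current
dIdt = np.gradient(I, dt)
e = np.zeros_like(t)                           # eddy-current response state
for n in range(1, len(t)):                     # forward-Euler integration
    e[n] = e[n - 1] + dt * (dIdt[n - 1] - e[n - 1]) / tau

M_true, a_true = 2.5e-3, 4.0e-6
psi = M_true * I + a_true * e                  # synthetic measurement

# Recover mutual inductance and eddy amplitude by linear least squares.
X = np.column_stack([I, e])
M_fit, a_fit = np.linalg.lstsq(X, psi, rcond=None)[0]
```

In the paper's setting the time constant itself is also estimated (e.g. by scanning tau and refitting), and the fitted eddy term is then subtracted from plasma-shot signals to isolate the plasma-related response.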