Tully, Mary P; Buchan, Iain E
2009-12-01
To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (median 9 pharmacists collecting data per day, range 4-17) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1 error per patient, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at a patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.
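The headline figure above is a simple proportion of orders. As a quick, hedged illustration (not code from the paper), the sketch below recomputes the 10.5% error-identification rate and a normal-approximation 95% confidence interval from the counts reported in the abstract; the variable names are ours.

```python
# Minimal sketch (not from the paper): reproduce the headline error-identification
# rate and an approximate 95% confidence interval from the counts in the abstract.
import math

orders_reviewed = 33012   # new medication orders reviewed
errors_found = 3455       # orders in which an error was identified

rate = errors_found / orders_reviewed                # ~0.105, i.e. 10.5% of orders
se = math.sqrt(rate * (1 - rate) / orders_reviewed)  # normal-approximation SE
ci_low, ci_high = rate - 1.96 * se, rate + 1.96 * se

print(f"error identification rate: {rate:.1%} "
      f"(95% CI {ci_low:.1%} to {ci_high:.1%})")
```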
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.; Voska, Ned (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define System; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
Using EHR Data to Detect Prescribing Errors in Rapidly Discontinued Medication Orders.
Burlison, Jonathan D; McDaniel, Robert B; Baker, Donald K; Hasan, Murad; Robertson, Jennifer J; Howard, Scott C; Hoffman, James M
2018-01-01
Previous research developed a new method for locating prescribing errors in rapidly discontinued electronic medication orders. Although effective, the prospective design of that research hinders its feasibility for regular use. Our objectives were to assess a method to retrospectively detect prescribing errors, to characterize the identified errors, and to identify potential improvement opportunities. Electronically submitted medication orders from 28 randomly selected days that were discontinued within 120 minutes of submission were reviewed and categorized as most likely errors, nonerrors, or not enough information to determine status. Identified errors were evaluated by amount of time elapsed from original submission to discontinuation, error type, staff position, and potential clinical significance. Pearson's chi-square test was used to compare rates of errors across prescriber types. In all, 147 errors were identified in 305 medication orders. The method was most effective for orders that were discontinued within 90 minutes. Duplicate orders were most common; physicians in training had the highest error rate (p < 0.001), and 24 errors were potentially clinically significant. None of the errors were voluntarily reported. It is possible to identify prescribing errors in rapidly discontinued medication orders by using retrospective methods that do not require interrupting prescribers to discuss order details. Future research could validate our methods in different clinical settings. Regular use of this measure could help determine the causes of prescribing errors, track performance, and identify and evaluate interventions to improve prescribing systems and processes. Schattauer GmbH Stuttgart.
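The screening step described above is essentially a time-window filter followed by a chart review and a chi-square comparison across prescriber types. The sketch below illustrates that pipeline under stated assumptions: the column names, toy data, and the "judged_error" review outcome are ours, not the authors'; only the 120-minute window and the Pearson chi-square test come from the abstract.

```python
# Illustrative sketch, not the authors' code: flag orders discontinued within
# a cutoff window and compare error rates across prescriber types.
import pandas as pd
from scipy.stats import chi2_contingency

orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4, 5, 6],
    "submitted": pd.to_datetime(["2024-01-01 08:00"] * 6),
    "discontinued": pd.to_datetime([
        "2024-01-01 08:45", "2024-01-01 11:30", "2024-01-01 09:10",
        "2024-01-01 08:20", "2024-01-01 10:05", "2024-01-01 09:55",
    ]),
    "prescriber_type": ["trainee", "attending", "trainee",
                        "attending", "trainee", "attending"],
    "judged_error": [True, False, True, False, True, False],  # chart-review result
})

cutoff_minutes = 120
elapsed = (orders["discontinued"] - orders["submitted"]).dt.total_seconds() / 60
candidates = orders[elapsed <= cutoff_minutes]   # rapidly discontinued orders

# Compare error rates across prescriber types (Pearson chi-square).
table = pd.crosstab(candidates["prescriber_type"], candidates["judged_error"])
chi2, p, dof, _ = chi2_contingency(table)
print(table, f"\nchi2={chi2:.2f}, p={p:.3f}")
```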
Human factors process failure modes and effects analysis (HF PFMEA) software tool
NASA Technical Reports Server (NTRS)
Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)
2011-01-01
Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. A likelihood of occurrence, detection, and correction of the human error is identified. The severity of the effect of the human error is identified. From the likelihood of occurrence and the severity of the effect, the risk of potential harm is identified. The risk of potential harm is compared with a risk threshold to identify the appropriateness of corrective measures.
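A minimal sketch of the core idea in the patent abstract follows: look up candidate errors from the verb describing a human activity, score likelihood, detection, and severity, and compare the resulting risk against a threshold. The verb-to-error catalogue, the 1-5 scales, the multiplicative risk score, and the threshold value are all illustrative assumptions, not the patented tool's actual data or logic.

```python
# Illustrative sketch of the HF PFMEA idea described above (not the patented tool):
# map the verb describing a human activity to candidate errors, score each error,
# and flag those whose risk exceeds a threshold. All lookup values are made up.
from dataclasses import dataclass

VERB_TO_ERRORS = {   # hypothetical verb -> potential human error catalogue
    "connect": ["connect to wrong port", "fail to fully seat connector"],
    "record":  ["transpose digits", "omit entry"],
    "inspect": ["miss defect", "inspect wrong item"],
}

@dataclass
class ScoredError:
    error: str
    likelihood: int   # 1 (rare) .. 5 (frequent)
    detection: int    # 1 (easily detected/corrected) .. 5 (unlikely to be caught)
    severity: int     # 1 (negligible) .. 5 (catastrophic)

    @property
    def risk(self) -> int:
        return self.likelihood * self.detection * self.severity

def analyze_task(verb: str, score) -> list[ScoredError]:
    """Identify potential errors for a task verb and score each one."""
    return [ScoredError(e, *score(e)) for e in VERB_TO_ERRORS.get(verb, [])]

RISK_THRESHOLD = 27  # corrective measures warranted above this (illustrative)

# Example: a liquid-oxygen-pump style task, "connect hose to test port".
for s in analyze_task("connect", score=lambda e: (3, 4, 5)):
    action = "corrective measures" if s.risk > RISK_THRESHOLD else "accept/monitor"
    print(f"{s.error}: risk={s.risk} -> {action}")
```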
Structured methods for identifying and correcting potential human errors in aviation operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1997-10-01
Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).
Blood transfusion sampling and a greater role for error recovery.
Oldham, Jane
Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.
Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.
Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean
2017-06-01
Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were lack of supervision or lack of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
TH-B-BRC-01: How to Identify and Resolve Potential Clinical Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, I.
2016-06-15
Radiation treatment consists of a chain of events influenced by the quality of machine operation, beam data commissioning, machine calibration, patient specific data, simulation, treatment planning, imaging and treatment delivery. There is always a chance that the clinical medical physicist may make or fail to detect an error in one of the events that may impact on the patient’s treatment. In the clinical scenario, errors may be systematic and, without peer review, may have a low detectability because they are not part of routine QA procedures. During treatment, there might be errors on the machine that need attention. External reviews of some of the treatment delivery components by independent reviewers, like IROC, can detect errors, but may not be timely. The goal of this session is to help junior clinical physicists identify potential errors as well as the approach of quality assurance to perform a root cause analysis to find and eliminate an error and to continually monitor for errors. A compilation of potential errors will be presented by examples of the thought process required to spot the error and determine the root cause. Examples may include unusual machine operation, erratic electrometer reading, consistent lower electron output, variation in photon output, body parts inadvertently left in beam, unusual treatment plan, poor normalization, hot spots etc. Awareness of the possibility and detection of error in any link of the treatment process chain will help improve the safe and accurate delivery of radiation to patients. Four experts will discuss how to identify errors in four areas of clinical treatment. D. Followill, NIH grant CA 180803.
Acheampong, Franklin; Tetteh, Ashalley Raymond; Anto, Berko Panyin
2016-12-01
This study determined the incidence, types, clinical significance, and potential causes of medication administration errors (MAEs) at the emergency department (ED) of a tertiary health care facility in Ghana. This study used a cross-sectional nonparticipant observational technique. Study participants (nurses) were observed preparing and administering medication at the ED of a 2000-bed tertiary care hospital in Accra, Ghana. The observations were then compared with patients' medication charts, and identified errors were clarified with staff for possible causes. Of the 1332 observations made, involving 338 patients and 49 nurses, 362 had errors, representing 27.2%. However, the error rate excluding "lack of drug availability" fell to 12.8%. Without wrong time error, the error rate was 22.8%. The 2 most frequent error types were omission (n = 281, 77.6%) and wrong time (n = 58, 16%) errors. Omission error was mainly due to unavailability of medicine, 48.9% (n = 177). Although only one of the errors was potentially fatal, 26.7% were definitely clinically severe. The common themes that dominated the probable causes of MAEs were unavailability, staff factors, patient factors, prescription, and communication problems. This study gives credence to similar studies in different settings that MAEs occur frequently in the ED of hospitals. Most of the errors identified were not potentially fatal; however, preventive strategies need to be used to make life-saving processes such as drug administration in such specialized units error-free.
Medication Errors in Vietnamese Hospitals: Prevalence, Potential Outcome and Associated Factors
Nguyen, Huong-Thao; Nguyen, Tuan-Dung; van den Heuvel, Edwin R.; Haaijer-Ruskamp, Flora M.; Taxis, Katja
2015-01-01
Background Evidence from developed countries showed that medication errors are common and harmful. Little is known about medication errors in resource-restricted settings, including Vietnam. Objectives To determine the prevalence and potential clinical outcome of medication preparation and administration errors, and to identify factors associated with errors. Methods This was a prospective study conducted on six wards in two urban public hospitals in Vietnam. Data on preparation and administration errors of oral and intravenous medications were collected by direct observation, 12 hours per day on 7 consecutive days, on each ward. Multivariable logistic regression was applied to identify factors contributing to errors. Results In total, 2060 out of 5271 doses had at least one error. The error rate was 39.1% (95% confidence interval 37.8%-40.4%). Experts judged potential clinical outcomes as minor, moderate, and severe in 72 (1.4%), 1806 (34.2%) and 182 (3.5%) doses. Factors associated with errors were drug characteristics (administration route, complexity of preparation, drug class; all p values < 0.001), and administration time (drug round, p = 0.023; day of the week, p = 0.024). Several interactions between these factors were also significant. Nurse experience was not significant. Higher error rates were observed for intravenous medications involving complex preparation procedures and for anti-infective drugs. Slightly lower medication error rates were observed during afternoon rounds compared to other rounds. Conclusions Potentially clinically relevant errors occurred in more than a third of all medications in this large study conducted in a resource-restricted setting. Educational interventions, focusing on intravenous medications with complex preparation procedure, particularly antibiotics, are likely to improve patient safety. PMID:26383873
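The adjusted associations above come from multivariable logistic regression. Below is a hedged sketch of that kind of analysis using statsmodels on synthetic data; the variable names, the coefficients used to simulate the outcome, and the sample size are invented and only mimic the direction of the reported effects.

```python
# Sketch of the type of multivariable logistic regression described above
# (not the authors' analysis). Variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
doses = pd.DataFrame({
    "iv_route":        rng.integers(0, 2, n),   # 1 = intravenous administration
    "complex_prep":    rng.integers(0, 2, n),   # 1 = multi-step preparation
    "anti_infective":  rng.integers(0, 2, n),   # 1 = anti-infective drug class
    "afternoon_round": rng.integers(0, 2, n),   # 1 = afternoon drug round
})
# Synthetic outcome that roughly follows the direction of the reported effects.
linpred = (-1.0 + 0.9 * doses.iv_route + 0.8 * doses.complex_prep
           + 0.6 * doses.anti_infective - 0.3 * doses.afternoon_round)
doses["error"] = rng.binomial(1, 1 / (1 + np.exp(-linpred)))

model = smf.logit(
    "error ~ iv_route + complex_prep + anti_infective + afternoon_round",
    data=doses,
).fit(disp=False)
print(np.exp(model.params))      # adjusted odds ratios
print(np.exp(model.conf_int()))  # 95% confidence intervals
```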
Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.
1993-01-01
This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.
Kupas, Douglas F; Shayhorn, Meghan A; Green, Paul; Payton, Thomas F
2012-01-01
Medications are essential to emergency medical services (EMS) agencies when providing lifesaving care, but the EMS environment has challenges related to safe medication storage when compared with a hospital setting. We developed a structured process, based on common pharmacy practices, to review medications carried by EMS agencies to identify situations that may lead to medication error and to determine some best practices that may reduce potential errors and the risk of patient harm. To provide a descriptive account of EMS practices related to carrying and storing medications that have the potential for causing a medication administration error or patient harm. Using a structured process for inspection, an emergency medicine pharmacist and emergency physician(s) reviewed the medication carrying and storage practices of all nine advanced life support ambulance agencies within a five-county EMS region. Each medication carried and stored by the EMS agency was inspected for predetermined and spontaneously observed issues that could lead to medication error. These issues were documented and photographed. Two EMS medical directors reviewed each potential error for the risk of producing patient harm and assigned each to a category of high, moderate, or low risk. Because issues of temperature on EMS medications have been addressed elsewhere, this study concentrated on potential for EMS medication administration errors exclusive of storage temperatures. When reviewing medications carried by the nine EMS agencies, 38 medication safety issues were identified (range 1 to 8 per EMS agency). Of these, 16 were considered to be high risk, 14 moderate risk, and eight low risk for patient harm. Examples of potential issues included carrying expired medications, container-labeling issues, different medications stored in look-alike vials or prefilled syringes in the same compartment, and carrying crystalloid solutions next to solutions premixed with a medication. When reviewing medications stored at the EMS agency stations, eight safety issues were identified (range from 0 to 4 per station), including five moderate-risk and three low-risk issues. No agency had any high-risk medication issues related to storage of medication stock in the station. We observed potential medication safety issues related to how medications are carried and stored at all nine EMS agencies in a five-county region. Understanding these issues may assist EMS agencies in reducing the potential for a medication error and risk of patient harm. More research is needed to determine whether following these suggested best practices for carrying medications on EMS vehicles actually reduces errors in medication administration by EMS providers or decreases patient harm.
Use of modeling to identify vulnerabilities to human error in laparoscopy.
Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra
2010-01-01
This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
Matsumoto, Shokei; Jung, Kyoungwon; Smith, Alan; Coimbra, Raul
2018-06-23
To establish the preventable and potentially preventable death rates in a mature trauma center and to identify the causes of death and highlight the lessons learned from these cases. We analyzed data from a Level-1 Trauma Center Registry, collected over a 15-year period. Data on demographics, timing of death, and potential errors were collected. Deaths were judged as preventable (PD), potentially preventable (PPD), or non-preventable (NPD), following a strict external peer-review process. During the 15-year period, there were 874 deaths, 15 (1.7%) and 6 (0.7%) of which were considered PPDs and PDs, respectively. Patients in the PD and PPD groups were not sicker and had less severe head injury than those in the NPD group. The time-death distribution differed according to preventability. We identified 21 errors in the PD and PPD groups, but only 61 (7.3%) errors in the NPD group (n = 853). Errors in judgment accounted for the majority of errors, and for 90.5% of those in the PD and PPD groups. Although the numbers of PDs and PPDs were low, denoting maturity of our trauma center, there are important lessons to be learned about how errors in judgment led to deaths that could have been prevented.
A day in the life of a volunteer incident commander: errors, pressures and mitigating strategies.
Bearman, Christopher; Bremner, Peter A
2013-05-01
To meet an identified gap in the literature, this paper investigates the tasks that a volunteer incident commander needs to carry out during an incident, the errors that can be made and the way that errors are managed. In addition, pressures from goal seduction and situation aversion were also examined. Volunteer incident commanders participated in a two-part interview consisting of a critical decision method interview and discussions about a hierarchical task analysis constructed by the authors. A SHERPA analysis was conducted to further identify potential errors. The results identified the key tasks, errors with extreme risk, pressures from strong situations and mitigating strategies for errors and pressures. The errors and pressures provide a basic set of issues that need to be managed by both volunteer incident commanders and fire agencies. The mitigating strategies identified here suggest some ways that this can be done. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Outpatient Prescribing Errors and the Impact of Computerized Prescribing
Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W
2005-01-01
Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
Nanji, Karen C; Rothschild, Jeffrey M; Boehne, Jennifer J; Keohane, Carol A; Ash, Joan S; Poon, Eric G
2014-01-01
Electronic prescribing systems have often been promoted as a tool for reducing medication errors and adverse drug events. Recent evidence has revealed that adoption of electronic prescribing systems can lead to unintended consequences such as the introduction of new errors. The purpose of this study is to identify and characterize the unrealized potential and residual consequences of electronic prescribing on pharmacy workflow in an outpatient pharmacy. A multidisciplinary team conducted direct observations of workflow in an independent pharmacy and semi-structured interviews with pharmacy staff members about their perceptions of the unrealized potential and residual consequences of electronic prescribing systems. We used qualitative methods to iteratively analyze text data using a grounded theory approach, and derive a list of major themes and subthemes related to the unrealized potential and residual consequences of electronic prescribing. We identified the following five themes: Communication, workflow disruption, cost, technology, and opportunity for new errors. These contained 26 unique subthemes representing different facets of our observations and the pharmacy staff's perceptions of the unrealized potential and residual consequences of electronic prescribing. We offer targeted solutions to improve electronic prescribing systems by addressing the unrealized potential and residual consequences that we identified. These recommendations may be applied not only to improve staff perceptions of electronic prescribing systems but also to improve the design and/or selection of these systems in order to optimize communication and workflow within pharmacies while minimizing both cost and the potential for the introduction of new errors.
The use of source memory to identify one's own episodic confusion errors.
Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R
2001-03-01
In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.
Clinical review: Medication errors in critical care
Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas
2008-01-01
Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883
Medication administration errors in nursing homes using an automated medication dispensing system.
van den Bemt, Patricia M L A; Idzinga, Jetske C; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske
2009-01-01
OBJECTIVE To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. DESIGN The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. MEASUREMENTS Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. RESULTS In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong time errors (administering the medication at least 1 h early or late). The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05-1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66-46.50), medication crushed (OR 7.83; 95% CI 5.40-11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01-1.05), nursing home 2 (OR 3.97; 95% CI 2.86-5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04-4.18), time classes "7-10 am" (OR 2.28; 95% CI 1.50-3.47) and "10 am-2 pm" (OR 1.96; 1.18-3.27) and day of the week "Wednesday" (OR 1.46; 95% CI 1.03-2.07) are associated with a higher risk of administration errors. CONCLUSIONS Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support and by measures to reduce workload.
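The risk factors above are reported as odds ratios with 95% confidence intervals. As a hedged illustration of how such a figure is obtained for a single binary factor (the study itself used multivariable analysis), the sketch below computes a crude odds ratio and Woolf-type confidence interval from a hypothetical 2x2 table.

```python
# Illustrative calculation (not the study's data): crude odds ratio and 95% CI
# for an administration-error risk factor from a 2x2 table of hypothetical counts.
import math

#                 error  no error
a, b = 240, 860   # exposed doses (e.g., dose given to a female client)
c, d = 188, 937   # unexposed doses

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf SE on the log scale
low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {low:.2f}-{high:.2f})")
```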
Palmer, Katherine A; Shane, Rita; Wu, Cindy N; Bell, Douglas S; Diaz, Frank; Cook-Wiens, Galen; Jackevicius, Cynthia A
2016-01-01
Objective We sought to assess the potential of a widely available source of electronic medication data to prevent medication history errors and resultant inpatient order errors. Methods We used admission medication history (AMH) data from a recent clinical trial that identified 1017 AMH errors and 419 resultant inpatient order errors among 194 hospital admissions of predominantly older adult patients on complex medication regimens. Among the subset of patients for whom we could access current Surescripts electronic pharmacy claims data (SEPCD), two pharmacists independently assessed error severity and our main outcome, which was whether SEPCD (1) was unrelated to the medication error; (2) probably would not have prevented the error; (3) might have prevented the error; or (4) probably would have prevented the error. Results Seventy patients had both AMH errors and current, accessible SEPCD. SEPCD probably would have prevented 110 (35%) of 315 AMH errors and 46 (31%) of 147 resultant inpatient order errors. When we excluded the least severe medication errors, SEPCD probably would have prevented 99 (47%) of 209 AMH errors and 37 (61%) of 61 resultant inpatient order errors. SEPCD probably would have prevented at least one AMH error in 42 (60%) of 70 patients. Conclusion When current SEPCD was available for older adult patients on complex medication regimens, it had substantial potential to prevent AMH errors and resultant inpatient order errors, with greater potential to prevent more severe errors. Further study is needed to measure the benefit of SEPCD in actual use at hospital admission. PMID:26911817
Towards a robust BCI: error potentials and online learning.
Buttfield, Anna; Ferrez, Pierre W; Millán, José del R
2006-06-01
Recent advances in the field of brain-computer interfaces (BCIs) have shown that BCIs have the potential to provide a powerful new channel of communication, completely independent of muscular and nervous systems. However, while there have been successful laboratory demonstrations, there are still issues that need to be addressed before BCIs can be used by nonexperts outside the laboratory. At IDIAP Research Institute, we have been investigating several areas that we believe will allow us to improve the robustness, flexibility, and reliability of BCIs. One area is recognition of cognitive error states, that is, identifying errors through the brain's reaction to mistakes. The production of these error potentials (ErrP) in reaction to an error made by the user is well established. We have extended this work by identifying a similar but distinct ErrP that is generated in response to an error made by the interface, (a misinterpretation of a command that the user has given). This ErrP can be satisfactorily identified in single trials and can be demonstrated to improve the theoretical performance of a BCI. A second area of research is online adaptation of the classifier. BCI signals change over time, both between sessions and within a single session, due to a number of factors. This means that a classifier trained on data from a previous session will probably not be optimal for a new session. In this paper, we present preliminary results from our investigations into supervised online learning that can be applied in the initial training phase. We also discuss the future direction of this research, including the combination of these two currently separate issues to create a potentially very powerful BCI.
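The online-adaptation idea described above can be sketched with an incrementally updated linear classifier. The example below uses scikit-learn's SGDClassifier with partial_fit on synthetic feature vectors; the feature dimensionality, learning rate, and data are assumptions, not IDIAP's actual pipeline.

```python
# Minimal sketch of supervised online learning for a BCI classifier (not IDIAP's
# system): update a linear classifier trial by trial as labeled EEG feature
# vectors arrive. Features and labels here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
n_features = 32                      # e.g., band-power features per trial
classes = np.array([0, 1])           # two mental commands

clf = SGDClassifier(loss="log_loss", learning_rate="constant", eta0=0.01)

# Initial training phase: present labeled trials one at a time.
for trial in range(200):
    label = rng.integers(0, 2)
    x = rng.normal(loc=label * 0.3, scale=1.0, size=(1, n_features))
    clf.partial_fit(x, [label], classes=classes)

# Later trials: predict, then keep adapting with the supervised label.
x_new = rng.normal(loc=0.3, scale=1.0, size=(1, n_features))
print("predicted command:", clf.predict(x_new)[0])
clf.partial_fit(x_new, [1], classes=classes)   # continue online adaptation
```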
Analyzing Software Errors in Safety-Critical Embedded Systems
NASA Technical Reports Server (NTRS)
Lutz, Robyn R.
1994-01-01
This paper analyzes the root causes of safety-related software faults in safety-critical, embedded systems. Software faults identified as potentially hazardous to the system are distributed somewhat differently over the set of possible error causes than non-safety-related software faults.
Evaluation of a UMLS Auditing Process of Semantic Type Assignments
Gu, Huanying; Hripcsak, George; Chen, Yan; Morrey, C. Paul; Elhanan, Gai; Cimino, James J.; Geller, James; Perl, Yehoshua
2007-01-01
The UMLS is a terminological system that integrates many source terminologies. Each concept in the UMLS is assigned one or more semantic types from the Semantic Network, an upper level ontology for biomedicine. Due to the complexity of the UMLS, errors exist in the semantic type assignments. Finding assignment errors may unearth modeling errors. Even with sophisticated tools, discovering assignment errors requires manual review. In this paper we describe the evaluation of an auditing project of UMLS semantic type assignments. We studied the performance of the auditors who reviewed potential errors. We found that four auditors, interacting according to a multi-step protocol, identified a high rate of errors (one or more errors in 81% of concepts studied) and that results were sufficiently reliable (0.67 to 0.70) for the two most common types of errors. However, reliability was low for each individual auditor, suggesting that review of potential errors is resource-intensive. PMID:18693845
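Inter-auditor reliability figures such as the 0.67 to 0.70 reported above are commonly computed as chance-corrected agreement. The sketch below shows one such calculation (Cohen's kappa) for two auditors' judgments of flagged concepts; the ratings are hypothetical, and the paper's exact reliability statistic may differ.

```python
# Illustrative sketch: inter-rater reliability (Cohen's kappa) for two auditors'
# judgments of whether a flagged semantic-type assignment is an error.
# The ratings below are hypothetical.
from sklearn.metrics import cohen_kappa_score

auditor_a = ["error", "error", "ok", "error", "ok", "ok", "error", "error", "ok", "error"]
auditor_b = ["error", "ok",    "ok", "error", "ok", "error", "error", "error", "ok", "error"]

kappa = cohen_kappa_score(auditor_a, auditor_b)
print(f"Cohen's kappa = {kappa:.2f}")   # agreement corrected for chance
```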
Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O
2015-02-01
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association with the International Society for Quality in Health Care.
Chana, Narinder; Porat, Talya; Whittlesea, Cate; Delaney, Brendan
2017-03-01
Electronic prescribing has benefited from computerised clinical decision support systems (CDSSs); however, no published studies have evaluated the potential for a CDSS to support GPs in prescribing specialist drugs. To identify potential weaknesses and errors in the existing process of prescribing specialist drugs that could be addressed in the development of a CDSS. Semi-structured interviews with key informants followed by an observational study involving GPs in the UK. Twelve key informants were interviewed to investigate the use of CDSSs in the UK. Nine GPs were observed while performing case scenarios depicting requests from hospitals or patients to prescribe a specialist drug. Activity diagrams, hierarchical task analysis, and systematic human error reduction and prediction approach analyses were performed. The current process of prescribing specialist drugs by GPs is prone to error. Errors of omission due to lack of information were the most common errors, which could potentially result in a GP prescribing a specialist drug that should only be prescribed in hospitals, or prescribing a specialist drug without reference to a shared care protocol. Half of all possible errors in the prescribing process had a high probability of occurrence. A CDSS supporting GPs during the process of prescribing specialist drugs is needed. This could, first, support the decision making of whether or not to undertake prescribing, and, second, provide drug-specific parameters linked to shared care protocols, which could reduce the errors identified and increase patient safety. © British Journal of General Practice 2017.
Using medication list--problem list mismatches as markers of potential error.
Carpenter, James D.; Gorman, Paul N.
2002-01-01
The goal of this project was to specify and develop an algorithm that will check for drug and problem list mismatches in an electronic medical record (EMR). The algorithm is based on the premise that a patient's problem list and medication list should agree, and a mismatch may indicate medication error. Successful development of this algorithm could mean detection of some errors, such as medication orders entered into a wrong patient record, or drug therapy omissions, that are not otherwise detected via automated means. Additionally, mismatches may identify opportunities to improve problem list integrity. To assess the concept's feasibility, this study compared medications listed in a pharmacy information system with findings in an online nursing adult admission assessment, serving as a proxy for the problem list. Where drug and problem list mismatches were discovered, examination of the patient record confirmed the mismatch, and identified any potential causes. Evaluation of the algorithm in diabetes treatment indicates that it successfully detects both potential medication error and opportunities to improve problem list completeness. This algorithm, once fully developed and deployed, could prove a valuable way to improve the patient problem list, and could decrease the risk of medication error. PMID:12463796
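The premise of the algorithm, that every medication on the list should correspond to at least one entry on the problem list, can be sketched directly. The drug-to-indication mapping, function name, and example lists below are illustrative stand-ins, not the authors' implementation or knowledge base.

```python
# Sketch of the mismatch idea described above (not the authors' algorithm):
# flag medications with no corresponding entry on the problem list.
# The drug -> indication mapping is a tiny illustrative stand-in.
DRUG_INDICATIONS = {
    "metformin": {"diabetes mellitus"},
    "insulin glargine": {"diabetes mellitus"},
    "lisinopril": {"hypertension", "heart failure"},
    "levothyroxine": {"hypothyroidism"},
}

def find_mismatches(medication_list, problem_list):
    """Return meds whose expected indications are absent from the problem list."""
    problems = {p.lower() for p in problem_list}
    flagged = []
    for med in medication_list:
        indications = DRUG_INDICATIONS.get(med.lower(), set())
        if indications and not (indications & problems):
            flagged.append(med)   # possible wrong-patient order or missing problem
    return flagged

meds = ["Metformin", "Lisinopril", "Levothyroxine"]
problems = ["Hypertension", "Hypothyroidism"]
print(find_mismatches(meds, problems))   # -> ['Metformin']
```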
Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann
2008-01-01
Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151
Medication Administration Errors in Nursing Homes Using an Automated Medication Dispensing System
van den Bemt, Patricia M.L.A.; Idzinga, Jetske C.; Robertz, Hans; Kormelink, Dennis Groot; Pels, Neske
2009-01-01
Objective To identify the frequency of medication administration errors as well as their potential risk factors in nursing homes using a distribution robot. Design The study was a prospective, observational study conducted within three nursing homes in the Netherlands caring for 180 individuals. Measurements Medication errors were measured using the disguised observation technique. Types of medication errors were described. The correlation between several potential risk factors and the occurrence of medication errors was studied to identify potential causes for the errors. Results In total 2,025 medication administrations to 127 clients were observed. In these administrations 428 errors were observed (21.2%). The most frequently occurring types of errors were use of wrong administration techniques (especially incorrect crushing of medication and not supervising the intake of medication) and wrong time errors (administering the medication at least 1 h early or late). The potential risk factors female gender (odds ratio (OR) 1.39; 95% confidence interval (CI) 1.05–1.83), ATC medication class antibiotics (OR 11.11; 95% CI 2.66–46.50), medication crushed (OR 7.83; 95% CI 5.40–11.36), number of dosages/day/client (OR 1.03; 95% CI 1.01–1.05), nursing home 2 (OR 3.97; 95% CI 2.86–5.50), medication not supplied by distribution robot (OR 2.92; 95% CI 2.04–4.18), time classes “7–10 am” (OR 2.28; 95% CI 1.50–3.47) and “10 am-2 pm” (OR 1.96; 1.18–3.27) and day of the week “Wednesday” (OR 1.46; 95% CI 1.03–2.07) are associated with a higher risk of administration errors. Conclusions Medication administration in nursing homes is prone to many errors. This study indicates that the handling of the medication after removing it from the robot packaging may contribute to this high error frequency, which may be reduced by training of nurse attendants, by automated clinical decision support and by measures to reduce workload. PMID:19390109
Maskens, Carolyn; Downie, Helen; Wendt, Alison; Lima, Ana; Merkley, Lisa; Lin, Yulia; Callum, Jeannie
2014-01-01
This report provides a comprehensive analysis of transfusion errors occurring at a large teaching hospital and aims to determine key errors that are threatening transfusion safety, despite implementation of safety measures. Errors were prospectively identified from 2005 to 2010. Error data were coded on a secure online database called the Transfusion Error Surveillance System. Errors were defined as any deviation from established standard operating procedures. Errors were identified by clinical and laboratory staff. Denominator data for volume of activity were used to calculate rates. A total of 15,134 errors were reported with a median number of 215 errors per month (range, 85-334). Overall, 9083 (60%) errors occurred on the transfusion service and 6051 (40%) on the clinical services. In total, 23 errors resulted in patient harm: 21 of these errors occurred on the clinical services and two in the transfusion service. Of the 23 harm events, 21 involved inappropriate use of blood. Errors with no harm were 657 times more common than events that caused harm. The most common high-severity clinical errors were sample labeling (37.5%) and inappropriate ordering of blood (28.8%). The most common high-severity error in the transfusion service was sample accepted despite not meeting acceptance criteria (18.3%). The cost of product and component loss due to errors was $593,337. Errors occurred at every point in the transfusion process, with the greatest potential risk of patient harm resulting from inappropriate ordering of blood products and errors in sample labeling. © 2013 American Association of Blood Banks (CME).
A continuous quality improvement project to reduce medication error in the emergency department.
Lee, Sara Bc; Lee, Larry Ly; Yeung, Richard Sd; Chan, Jimmy Ts
2013-01-01
Medication errors are a common source of adverse healthcare incidents, particularly in the emergency department (ED), which has a number of factors that make it prone to medication errors. This project aims to reduce medication errors and improve the health and economic outcomes of clinical care in a Hong Kong ED. In 2009, a task group was formed to identify problems that potentially endanger medication safety and developed strategies to eliminate these problems. Responsible officers were assigned to look after seven error-prone areas. Strategies were proposed, discussed, endorsed and promulgated to eliminate the problems identified. Medication incidents (MIs) fell from 16 before the improvement work to 6 after it. This project successfully established a concrete organizational structure to safeguard error-prone areas of medication safety in a sustainable manner.
Cohen, Trevor; Blatter, Brett; Almeida, Carlos; Patel, Vimla L.
2007-01-01
Objective Contemporary error research suggests that the quest to eradicate error is misguided. Error commission, detection, and recovery are an integral part of cognitive work, even at the expert level. In collaborative workspaces, the perception of potential error is directly observable: workers discuss and respond to perceived violations of accepted practice norms. As perceived violations are captured and corrected preemptively, they do not fit Reason’s widely accepted definition of error as “failure to achieve an intended outcome.” However, perceived violations suggest the aversion of potential error, and consequently have implications for error prevention. This research aims to identify and describe perceived violations of the boundaries of accepted procedure in a psychiatric emergency department (PED), and how they are resolved in practice. Design Clinical discourse from fourteen PED patient rounds was audio-recorded. Excerpts from recordings suggesting perceived violations or incidents of miscommunication were extracted and analyzed using qualitative coding methods. The results are interpreted in relation to prior research on vulnerabilities to error in the PED. Results Thirty incidents of perceived violations or miscommunication are identified and analyzed. Of these, only one medication error was formally reported. Other incidents would not have been detected by a retrospective analysis. Conclusions The analysis of perceived violations expands the data available for error analysis beyond occasional reported adverse events. These data are prospective: responses are captured in real time. This analysis supports a set of recommendations to improve the quality of care in the PED and other critical care contexts. PMID:17329728
Stultz, Jeremy S; Nahata, Milap C
2015-07-01
Information technology (IT) has the potential to prevent medication errors. While many studies have analyzed specific IT technologies and preventable adverse drug events, no studies have identified risk factors for errors still occurring that are not preventable by IT. The objective of this study was to categorize reported or trigger tool-identified errors and adverse events (AEs) at a pediatric tertiary care institution. Also, we sought to identify medication errors preventable by IT, determine why IT-preventable errors occurred, and identify risk factors for errors that were not preventable by IT. This was a retrospective analysis of voluntarily reported or trigger tool-identified errors and AEs occurring from 1 July 2011 to 30 June 2012. Medication errors reaching the patients were categorized based on the origin, severity, and location of the error, the month in which they occurred, and the age of the patient involved. Error characteristics were included in a multivariable logistic regression model to determine independent risk factors for errors occurring that were not preventable by IT. A medication error was defined as a medication-related failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim. An IT-preventable error was defined as having an IT system in place to aid in prevention of the error at the phase and location of its origin. There were 936 medication errors (identified by voluntary reporting or a trigger tool system) included and analyzed. Drug administration errors were identified most frequently (53.4%), but prescribing errors most frequently caused harm (47.2% of harmful errors). There were 470 (50.2%) errors that were IT preventable at their origin, including 155 due to IT system bypasses, 103 due to insensitivity of IT alerting systems, and 47 with IT alert overrides. Dispensing, administration, and documentation errors had higher odds than prescribing errors for being not preventable by IT [odds ratio (OR) 8.0, 95% CI 4.4-14.6; OR 2.4, 95% CI 1.7-3.7; and OR 6.7, 95% CI 3.3-14.5, respectively; all p < 0.001]. Errors occurring in the operating room and in the outpatient setting had higher odds than intensive care units for being not preventable by IT (OR 10.4, 95% CI 4.0-27.2, and OR 2.6, 95% CI 1.3-5.0, respectively; all p ≤ 0.004). Despite extensive IT implementation at the studied institution, approximately one-half of the medication errors identified by voluntary reporting or a trigger tool system were not preventable by the utilized IT systems. Inappropriate use of IT systems was a common cause of errors. The identified risk factors represent areas where IT safety features were lacking.
Verifying Parentage and Confirming Identity in Blackberry with a Fingerprinting Set
USDA-ARS?s Scientific Manuscript database
Parentage and identity confirmation is an important aspect of clonally propagated, outcrossing crops. Potential errors resulting in misidentification include off-type pollination events, labeling errors, or sports of clones. DNA fingerprinting sets are an excellent solution to quickly identify off-type ...
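Identity confirmation with a fingerprinting set amounts to comparing an accession's marker profile against the reference profile for its labeled cultivar. The sketch below illustrates that comparison; the marker names, allele sizes, and cultivar profiles are invented, not the actual blackberry fingerprinting set.

```python
# Illustrative sketch (not the USDA protocol): confirm clone identity by comparing
# an accession's marker profile against the reference profile for its labeled
# cultivar. Marker names and allele sizes are made up.
REFERENCE_PROFILES = {
    "Cultivar_A": {"SSR01": (152, 160), "SSR02": (201, 201), "SSR03": (178, 186)},
    "Cultivar_B": {"SSR01": (148, 160), "SSR02": (205, 209), "SSR03": (178, 178)},
}

def mismatched_loci(observed, reference):
    """Loci where the observed genotype differs from the reference genotype."""
    return [locus for locus, alleles in reference.items()
            if tuple(sorted(observed.get(locus, ()))) != tuple(sorted(alleles))]

sample = {"SSR01": (160, 152), "SSR02": (201, 201), "SSR03": (178, 178)}
diffs = mismatched_loci(sample, REFERENCE_PROFILES["Cultivar_A"])
print("off-type or mislabeled" if diffs else "identity confirmed", diffs)
```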
Recommendations to Improve the Accuracy of Estimates of Physical Activity Derived from Self Report
Ainsworth, Barbara E; Caspersen, Carl J; Matthews, Charles E; Mâsse, Louise C; Baranowski, Tom; Zhu, Weimo
2013-01-01
Context Assessment of physical activity using self-report has the potential for measurement error that can lead to incorrect inferences about physical activity behaviors and bias study results. Objective To provide recommendations to improve the accuracy of physical activity derived from self report. Process We provide an overview of presentations and a compilation of perspectives shared by the authors of this paper and workgroup members. Findings We identified a conceptual framework for reducing errors using physical activity self-report questionnaires. The framework identifies six steps to reduce error: (1) identifying the need to measure physical activity, (2) selecting an instrument, (3) collecting data, (4) analyzing data, (5) developing a summary score, and (6) interpreting data. Underlying the first four steps are behavioral parameters of type, intensity, frequency, and duration of physical activities performed, activity domains, and the location where activities are performed. We identified ways to reduce measurement error at each step and made recommendations for practitioners, researchers, and organizational units to reduce error in questionnaire assessment of physical activity. Conclusions Self-report measures of physical activity have a prominent role in research and practice settings. Measurement error can be reduced by applying the framework discussed in this paper. PMID:22287451
Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.
Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L
2018-05-01
Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.
Donn, Steven M; McDonnell, William M
2012-01-01
The Institute of Medicine has recommended a change in culture from "name and blame" to patient safety. This will require system redesign to identify and address errors, establish performance standards, and set safety expectations. This approach, however, is at odds with the present medical malpractice (tort) system. The current system is outcomes-based, meaning that health care providers and institutions are often sued despite providing appropriate care. Nevertheless, the focus should remain to provide the safest patient care. Effective peer review may be hindered by the present tort system. Reporting of medical errors is a key piece of peer review and education, and both anonymous reporting and confidential reporting of errors have potential disadvantages. Diagnostic and treatment errors continue to be the leading sources of allegations of malpractice in pediatrics, and the neonatal intensive care unit is uniquely vulnerable. Most errors result from systems failures rather than human error. Risk management can be an effective process to identify, evaluate, and address problems that may injure patients, lead to malpractice claims, and result in financial losses. Risk management identifies risk or potential risk, calculates the probability of an adverse event arising from a risk, estimates the impact of the adverse event, and attempts to control the risk. Implementation of a successful risk management program requires a positive attitude, sufficient knowledge base, and a commitment to improvement. Transparency in the disclosure of medical errors and a strategy of prospective risk management in dealing with medical errors may result in a substantial reduction in medical malpractice lawsuits, lower litigation costs, and a more safety-conscious environment. Thieme Medical Publishers, Inc.
Errors in Aviation Decision Making: Bad Decisions or Bad Luck?
NASA Technical Reports Server (NTRS)
Orasanu, Judith; Martin, Lynne; Davison, Jeannie; Null, Cynthia H. (Technical Monitor)
1998-01-01
Despite efforts to design systems and procedures to support 'correct' and safe operations in aviation, errors in human judgment still occur and contribute to accidents. In this paper we examine how an NDM (naturalistic decision making) approach might help us to understand the role of decision processes in negative outcomes. Our strategy was to examine a collection of identified decision errors through the lens of an aviation decision process model and to search for common patterns. The second, and more difficult, task was to determine what might account for those patterns. The corpus we analyzed consisted of tactical decision errors identified by the NTSB (National Transportation Safety Board) from a set of accidents in which crew behavior contributed to the accident. A common pattern emerged: about three quarters of the errors represented plan-continuation errors, that is, a decision to continue with the original plan despite cues that suggested changing the course of action. Features in the context that might contribute to these errors were identified: (a) ambiguous dynamic conditions and (b) organizational and socially-induced goal conflicts. We hypothesize that 'errors' are mediated by underestimation of risk and failure to analyze the potential consequences of continuing with the initial plan. Stressors may further contribute to these effects. Suggestions for improving performance in these error-inducing contexts are discussed.
Evaluating mixed samples as a source of error in non-invasive genetic studies using microsatellites
Roon, David A.; Thomas, M.E.; Kendall, K.C.; Waits, L.P.
2005-01-01
The use of noninvasive genetic sampling (NGS) for surveying wild populations is increasing rapidly. Currently, only a limited number of studies have evaluated potential biases associated with NGS. This paper evaluates the potential errors associated with analysing mixed samples drawn from multiple animals. Most NGS studies assume that mixed samples will be identified and removed during the genotyping process. We evaluated this assumption by creating 128 mixed samples of extracted DNA from brown bear (Ursus arctos) hair samples. These mixed samples were genotyped and screened for errors at six microsatellite loci according to protocols consistent with those used in other NGS studies. Five mixed samples produced acceptable genotypes after the first screening. However, all mixed samples produced multiple alleles at one or more loci, amplified as only one of the source samples, or yielded inconsistent electropherograms by the final stage of the error-checking process. These processes could potentially reduce the number of individuals observed in NGS studies, but errors should be conservative within demographic estimates. Researchers should be aware of the potential for mixed samples and carefully design gel analysis criteria and error checking protocols to detect mixed samples.
Bootstrap Estimates of Standard Errors in Generalizability Theory
ERIC Educational Resources Information Center
Tong, Ye; Brennan, Robert L.
2007-01-01
Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
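The record above is truncated, but its topic (bootstrap standard errors for variance components) lends itself to a brief illustration. The sketch below is a generic person-level bootstrap for a crossed persons-by-items design, not Brennan's bias-correcting procedure; the data are simulated placeholders.

```python
# Generic sketch: bootstrap the person variance component in a p x i design
# and use the spread of replicate estimates as a standard error.
import numpy as np

rng = np.random.default_rng(0)

def variance_components(scores):
    """ANOVA estimates of (sigma2_p, sigma2_i, sigma2_pi) for a persons x items matrix."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    ms_p = n_i * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_p - 1)
    ms_i = n_p * np.sum((scores.mean(axis=0) - grand) ** 2) / (n_i - 1)
    resid = (scores
             - scores.mean(axis=1, keepdims=True)
             - scores.mean(axis=0, keepdims=True)
             + grand)
    ms_pi = np.sum(resid ** 2) / ((n_p - 1) * (n_i - 1))
    return (ms_p - ms_pi) / n_i, (ms_i - ms_pi) / n_p, ms_pi

scores = rng.normal(size=(50, 10))             # placeholder data: 50 persons, 10 items
boot = [variance_components(scores[rng.integers(0, 50, size=50), :])[0]
        for _ in range(1000)]                  # resample persons with replacement
print("bootstrap SE of the person variance component:",
      round(float(np.std(boot, ddof=1)), 4))
```

As the abstract notes, a naive bootstrap of this kind can be biased, which is what motivates the bias-correcting procedures attributed to Brennan.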
The District Nursing Clinical Error Reduction Programme.
McGraw, Caroline; Topping, Claire
2011-01-01
The District Nursing Clinical Error Reduction (DANCER) Programme was initiated in NHS Islington following an increase in the number of reported medication errors. The objectives were to reduce the actual degree of harm and the potential risk of harm associated with medication errors and to maintain the existing positive reporting culture, while robustly addressing performance issues. One hundred medication errors reported in 2007/08 were analysed using a framework that specifies the factors that predispose to adverse medication events in domiciliary care. Various contributory factors were identified and interventions were subsequently developed to address poor drug calculation and medication problem-solving skills and incorrectly transcribed medication administration record charts. Follow up data were obtained at 12 months and two years. The evaluation has shown that although medication errors do still occur, the programme has resulted in a marked shift towards a reduction in the associated actual degree of harm and the potential risk of harm.
Double ErrP Detection for Automatic Error Correction in an ERP-Based BCI Speller.
Cruz, Aniana; Pires, Gabriel; Nunes, Urbano J
2018-01-01
Brain-computer interface (BCI) is a useful device for people with severe motor disabilities. However, due to its low speed and low reliability, BCI still has a very limited application in daily real-world tasks. This paper proposes a P300-based BCI speller combined with a double error-related potential (ErrP) detection to automatically correct erroneous decisions. This novel approach introduces a second error detection to infer whether a wrong automatic correction also elicits a second ErrP. Thus, two single-trial responses, instead of one, contribute to the final selection, improving the reliability of error detection. Moreover, to increase error detection, the evoked potential detected as target by the P300 classifier is combined with the evoked error potential at the feature level. Discriminable error and positive potentials (response to correct feedback) were clearly identified. The proposed approach was tested on nine healthy participants and one tetraplegic participant. The online average accuracies for the first and second ErrPs were 88.4% and 84.8%, respectively. With automatic correction, spelling accuracy improved by around 5%, reaching 89.9% at an effective rate of 2.92 symbols/min. The proposed approach revealed that double ErrP detection can improve the reliability and speed of BCI systems.
Legal consequences of the moral duty to report errors.
Hall, Jacqulyn Kay
2003-09-01
Increasingly, clinicians are under a moral duty to report errors to the patients who are injured by such errors. The sources of this duty are identified, and its probable impact on malpractice litigation and criminal law is discussed. The potential consequences of enforcing this new moral duty as a minimum in law are noted. One predicted consequence is that the trend will be accelerated toward government payment of compensation for errors. The effect of truth-telling on individuals is discussed.
Identifying the causes of road crashes in Europe
Thomas, Pete; Morris, Andrew; Talbot, Rachel; Fagerlind, Helen
2013-01-01
This research applies a recently developed model of accident causation, developed to investigate industrial accidents, to a specially gathered sample of 997 crashes investigated in-depth in 6 countries. Based on the work of Hollnagel the model considers a collision to be a consequence of a breakdown in the interaction between road users, vehicles and the organisation of the traffic environment. 54% of road users experienced interpretation errors while 44% made observation errors and 37% planning errors. In contrast to other studies only 11% of drivers were identified as distracted and 8% inattentive. There was remarkably little variation in these errors between the main road user types. The application of the model to future in-depth crash studies offers the opportunity to identify new measures to improve safety and to mitigate the social impact of collisions. Examples given include the potential value of co-driver advisory technologies to reduce observation errors and predictive technologies to avoid conflicting interactions between road users. PMID:24406942
Lago, Paola; Bizzarri, Giancarlo; Scalzotto, Francesca; Parpaiola, Antonella; Amigoni, Angela; Putoto, Giovanni; Perilongo, Giorgio
2012-01-01
Objective Administering medication to hospitalised infants and children is a complex process at high risk of error. Failure mode and effect analysis (FMEA) is a proactive tool used to analyse risks, identify failures before they happen and prioritise remedial measures. To examine the hazards associated with the process of drug delivery to children, we performed a proactive risk-assessment analysis. Design and setting Five multidisciplinary teams, representing different divisions of the paediatric department at Padua University Hospital, were trained to analyse the drug-delivery process, to identify possible causes of failures and their potential effects, to calculate a risk priority number (RPN) for each failure and plan changes in practices. Primary outcome To identify higher-priority potential failure modes as defined by RPNs and to plan changes in clinical practice to reduce the risk of patient harm and improve safety in the process of medication use in children. Results In all, 37 higher-priority potential failure modes and 71 associated causes and effects were identified. The highest RPNs (>48) related mainly to errors in calculating drug doses and concentrations. Many of these failure modes were found in all five units, suggesting the presence of common targets for improvement, particularly in enhancing the safety of prescription and preparation of intravenous drugs. The introduction of new activities into the revised process of administering drugs reduced the high-risk failure modes by 60%. Conclusions FMEA is an effective proactive risk-assessment tool useful to aid multidisciplinary groups in understanding a process of care and identifying errors that may occur, prioritising remedial interventions and possibly enhancing the safety of drug delivery in children. PMID:23253870
Lago, Paola; Bizzarri, Giancarlo; Scalzotto, Francesca; Parpaiola, Antonella; Amigoni, Angela; Putoto, Giovanni; Perilongo, Giorgio
2012-01-01
Administering medication to hospitalised infants and children is a complex process at high risk of error. Failure mode and effect analysis (FMEA) is a proactive tool used to analyse risks, identify failures before they happen and prioritise remedial measures. To examine the hazards associated with the process of drug delivery to children, we performed a proactive risk-assessment analysis. Five multidisciplinary teams, representing different divisions of the paediatric department at Padua University Hospital, were trained to analyse the drug-delivery process, to identify possible causes of failures and their potential effects, to calculate a risk priority number (RPN) for each failure and plan changes in practices. To identify higher-priority potential failure modes as defined by RPNs and to plan changes in clinical practice to reduce the risk of patient harm and improve safety in the process of medication use in children. In all, 37 higher-priority potential failure modes and 71 associated causes and effects were identified. The highest RPNs (>48) related mainly to errors in calculating drug doses and concentrations. Many of these failure modes were found in all five units, suggesting the presence of common targets for improvement, particularly in enhancing the safety of prescription and preparation of intravenous drugs. The introduction of new activities into the revised process of administering drugs reduced the high-risk failure modes by 60%. FMEA is an effective proactive risk-assessment tool useful to aid multidisciplinary groups in understanding a process of care and identifying errors that may occur, prioritising remedial interventions and possibly enhancing the safety of drug delivery in children.
Risk Factors for Increased Severity of Paediatric Medication Administration Errors
Sears, Kim; Goodman, William M.
2012-01-01
Patients' risks from medication errors are widely acknowledged. Yet not all errors, if they occur, have the same risks for severe consequences. Facing resource constraints, policy makers could prioritize factors having the greatest severe–outcome risks. This study assists such prioritization by identifying work-related risk factors most clearly associated with more severe consequences. Data from three Canadian paediatric centres were collected, without identifiers, on actual or potential errors that occurred. Three hundred seventy-two errors were reported, with outcome severities ranging from time delays up to fatalities. Four factors correlated significantly with increased risk for more severe outcomes: insufficient training; overtime; precepting a student; and off-service patient. Factors' impacts on severity also vary with error class: for wrong-time errors, the factors precepting a student or working overtime significantly increase severe-outcomes risk. For other types, caring for an off-service patient has greatest severity risk. To expand such research, better standardization is needed for categorizing outcome severities. PMID:23968607
DOT National Transportation Integrated Search
2001-01-01
The purpose of this study was to examine controller and pilot errors in airport operations to identify potential tower remedies. The first part of the report contains a review of the literature of studies conducted of tower operations and of efforts...
Opioid errors in inpatient palliative care services: a retrospective review.
Heneka, Nicole; Shaw, Tim; Rowett, Debra; Lapkin, Samuel; Phillips, Jane L
2018-06-01
Opioids are a high-risk medicine frequently used to manage palliative patients' cancer-related pain and other symptoms. Despite the high volume of opioid use in inpatient palliative care services, and the potential for patient harm, few studies have focused on opioid errors in this population. To (i) identify the number of opioid errors reported by inpatient palliative care services, (ii) identify reported opioid error characteristics and (iii) determine the impact of opioid errors on palliative patient outcomes. A 24-month retrospective review of opioid errors reported in three inpatient palliative care services in one Australian state. Of the 55 opioid errors identified, 84% reached the patient. Most errors involved morphine (35%) or hydromorphone (29%). Opioid administration errors accounted for 76% of reported opioid errors, largely due to omitted dose (33%) or wrong dose (24%) errors. Patients were more likely to receive a lower dose of opioid than ordered as a direct result of an opioid error (57%), with errors adversely impacting pain and/or symptom management in 42% of patients. Half (53%) of the affected patients required additional treatment and/or care as a direct consequence of the opioid error. This retrospective review has provided valuable insights into the patterns and impact of opioid errors in inpatient palliative care services. Iatrogenic harm related to opioid underdosing errors contributed to palliative patients' unrelieved pain. Better understanding the factors that contribute to opioid errors and the role of safety culture in the palliative care service context warrants further investigation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Morbi, Abigail H M; Hamady, Mohamad S; Riga, Celia V; Kashef, Elika; Pearch, Ben J; Vincent, Charles; Moorthy, Krishna; Vats, Amit; Cheshire, Nicholas J W; Bicknell, Colin D
2012-08-01
To determine the type and frequency of errors during vascular interventional radiology (VIR) and design and implement an intervention to reduce error and improve efficiency in this setting. Ethical guidance was sought from the Research Services Department at Imperial College London. Informed consent was not obtained. Field notes were recorded during 55 VIR procedures by a single observer. Two blinded assessors identified failures from field notes and categorized them into one or more errors by using a 22-part classification system. The potential to cause harm, disruption to procedural flow, and preventability of each failure was determined. A preprocedural team rehearsal (PPTR) was then designed and implemented to target frequent preventable potential failures. Thirty-three procedures were observed subsequently to determine the efficacy of the PPTR. Nonparametric statistical analysis was used to determine the effect of intervention on potential failure rates, potential to cause harm and procedural flow disruption scores (Mann-Whitney U test), and number of preventable failures (Fisher exact test). Before intervention, 1197 potential failures were recorded, of which 54.6% were preventable. A total of 2040 errors were deemed to have occurred to produce these failures. Planning error (19.7%), staff absence (16.2%), equipment unavailability (12.2%), communication error (11.2%), and lack of safety consciousness (6.1%) were the most frequent errors, accounting for 65.4% of the total. After intervention, 352 potential failures were recorded. Classification resulted in 477 errors. Preventable failures decreased from 54.6% to 27.3% (P < .001) with implementation of PPTR. Potential failure rates per hour decreased from 18.8 to 9.2 (P < .001), with no increase in potential to cause harm or procedural flow disruption per failure. Failures during VIR procedures are largely because of ineffective planning, communication error, and equipment difficulties, rather than a result of technical or patient-related issues. Many of these potential failures are preventable. A PPTR is an effective means of targeting frequent preventable failures, reducing procedural delays and improving patient safety.
2010 drug packaging review: identifying problems to prevent errors.
2011-06-01
Prescrire's analyses showed that the quality of drug packaging in 2010 still left much to be desired. Potentially dangerous packaging remains a significant problem: unclear labelling is a source of medication errors; dosing devices for some psychotropic drugs create a risk of overdose; child-proof caps are often lacking; and too many patient information leaflets are misleading or difficult to understand. Everything that is needed for safe drug packaging is available; it is now up to regulatory agencies and drug companies to act responsibly. In the meantime, health professionals can help their patients by learning to identify the pitfalls of drug packaging and providing safe information to help prevent medication errors.
Outliers: A Potential Data Problem.
ERIC Educational Resources Information Center
Douzenis, Cordelia; Rakow, Ernest A.
Outliers, extreme data values relative to others in a sample, may distort statistics that assume interval levels of measurement and normal distribution. The outlier may be a valid value or an error. Several procedures are available for identifying outliers, and each may be applied to errors of prediction from the regression lines for utility in a…
Detecting and Characterizing Semantic Inconsistencies in Ported Code
NASA Technical Reports Server (NTRS)
Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha
2013-01-01
Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.
A guide to evaluating linkage quality for the analysis of linked data.
Harron, Katie L; Doidge, James C; Knight, Hannah E; Gilbert, Ruth E; Goldstein, Harvey; Cromwell, David A; van der Meulen, Jan H
2017-10-01
Linked datasets are an important resource for epidemiological and clinical studies, but linkage error can lead to biased results. For data security reasons, linkage of personal identifiers is often performed by a third party, making it difficult for researchers to assess the quality of the linked dataset in the context of specific research questions. This is compounded by a lack of guidance on how to determine the potential impact of linkage error. We describe how linkage quality can be evaluated and provide widely applicable guidance for both data providers and researchers. Using an illustrative example of a linked dataset of maternal and baby hospital records, we demonstrate three approaches for evaluating linkage quality: applying the linkage algorithm to a subset of gold standard data to quantify linkage error; comparing characteristics of linked and unlinked data to identify potential sources of bias; and evaluating the sensitivity of results to changes in the linkage procedure. These approaches can inform our understanding of the potential impact of linkage error and provide an opportunity to select the most appropriate linkage procedure for a specific analysis. Evaluating linkage quality in this way will improve the quality and transparency of epidemiological and clinical research using linked data. © The Author 2017. Published by Oxford University Press on behalf of the International Epidemiological Association.
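As a concrete, hypothetical illustration of the second approach listed above (comparing characteristics of linked and unlinked data), the sketch below contrasts summary statistics for baby records with and without a linked maternal record; the file and field names are invented for the example.

```python
# Hypothetical sketch: compare linked vs unlinked records to look for
# systematic differences that could signal linkage bias.
import pandas as pd

babies = pd.read_csv("baby_records.csv")           # hypothetical extract
babies["linked"] = babies["mother_id"].notna()     # True if a maternal record was linked

# Large differences between groups suggest linkage error is not random
# and may bias analyses based only on the linked subset.
print(babies.groupby("linked")[["gestational_age_weeks", "birthweight_g"]].mean().round(1))
print(babies.groupby("linked")["preterm"].mean().round(3))   # proportion preterm per group
```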
Microscopic saw mark analysis: an empirical approach.
Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles
2015-01-01
Microscopic saw mark analysis is a well-published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The presented study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigate the potential for variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighted the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62-17.82%. © 2014 American Academy of Forensic Sciences.
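For readers unfamiliar with the classifiers mentioned, the sketch below shows the general shape of such an analysis: a random forest fitted to saw-mark measurements, with an out-of-bag error rate and feature importances. It is illustrative only; the input file, feature names and model settings are assumptions, not details from the study.

```python
# Illustrative sketch: random forest over hypothetical saw-mark measurements,
# reporting out-of-bag error and the discriminatory value of each variable.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

marks = pd.read_csv("saw_marks.csv")               # hypothetical feature table
X = marks.drop(columns=["saw_type"])               # e.g. kerf width, striation spacing
y = marks["saw_type"]                              # one of four saw classes

forest = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
forest.fit(X, y)

print("out-of-bag error rate: {:.1%}".format(1 - forest.oob_score_))
print(pd.Series(forest.feature_importances_, index=X.columns)
        .sort_values(ascending=False))            # variables ranked by discriminatory value
```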
Force Analysis and Energy Operation of Chaotic System of Permanent-Magnet Synchronous Motor
NASA Astrophysics Data System (ADS)
Qi, Guoyuan; Hu, Jianbing
2017-12-01
The disadvantage of a nondimensionalized model of a permanent-magnet synchronous motor (PMSM) is identified. The original PMSM model is transformed into a Kolmogorov system to aid dynamic force analysis. The vector field of the PMSM is analogous to a force field comprising four types of torque: inertial, internal, dissipative, and generalized external. Using a feedback viewpoint, the error torque between the external torque and the dissipative torque is identified. A pitchfork bifurcation analysis of the PMSM is performed. Four forms of energy are identified for the system: kinetic, potential, dissipative, and supplied. The physical interpretations of the force decomposition and of the energy exchange are given. Casimir energy is stored energy, and its rate of change is the error power between the dissipative energy and the energy supplied to the motor. Error torque and error power influence the different types of dynamic modes. The Hamiltonian energy and Casimir energy are compared to determine the role of each in producing the dynamic modes. A supremum bound for the chaotic attractor is proposed using the error power and a Lagrange multiplier.
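For orientation, a Kolmogorov-type decomposition of the kind referenced in this abstract can be written in the following generic form; the notation is an assumption for illustration and is not quoted from the paper.

\[
\dot{\mathbf{x}} \;=\; J(\mathbf{x})\,\nabla H(\mathbf{x}) \;-\; \Lambda\,\mathbf{x} \;+\; \mathbf{f},
\]

where the skew-symmetric term \(J(\mathbf{x})\,\nabla H(\mathbf{x})\) collects the conservative inertial and internal torques, \(-\Lambda\,\mathbf{x}\) is the dissipative torque, and \(\mathbf{f}\) is the generalized external torque. Because a Casimir function \(C(\mathbf{x})\) of the Poisson structure satisfies \(\nabla C^{\mathsf{T}} J = 0\), its rate of change reduces to \(\dot{C} = \nabla C^{\mathsf{T}}(\mathbf{f} - \Lambda\,\mathbf{x})\), i.e. the error power between the supplied and dissipated power referred to above.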
[Improving blood safety: errors management in transfusion medicine].
Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana
2014-01-01
The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system for the systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors reveals the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to the type, frequency and part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that no errors with actual health consequences for the blood donor/patient occurred. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. The error reporting system has an important role in error management and in reducing the transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. A large percentage of errors in transfusion medicine can be avoided, and prevention is cost-effective, systematic and applicable.
Concomitant prescribing and dispensing errors at a Brazilian hospital: a descriptive study
Silva, Maria das Dores Graciano; Rosa, Mário Borges; Franklin, Bryony Dean; Reis, Adriano Max Moreira; Anchieta, Lêni Márcia; Mota, Joaquim Antônio César
2011-01-01
OBJECTIVE: To analyze the prevalence and types of prescribing and dispensing errors occurring with high-alert medications and to propose preventive measures to avoid errors with these medications. INTRODUCTION: The prevalence of adverse events in health care has increased, and medication errors are probably the most common cause of these events. Pediatric patients are known to be a high-risk group and are an important target in medication error prevention. METHODS: Observers collected data on prescribing and dispensing errors occurring with high-alert medications for pediatric inpatients in a university hospital. In addition to classifying the types of error that occurred, we identified cases of concomitant prescribing and dispensing errors. RESULTS: One or more prescribing errors, totaling 1,632 errors, were found in 632 (89.6%) of the 705 high-alert medications that were prescribed and dispensed. We also identified at least one dispensing error in each high-alert medication dispensed, totaling 1,707 errors. Among these dispensing errors, 723 (42.4%) content errors occurred concomitantly with the prescribing errors. A subset of dispensing errors may have occurred because of poor prescription quality. The observed concomitancy should be examined carefully because improvements in the prescribing process could potentially prevent these problems. CONCLUSION: The system of drug prescribing and dispensing at the hospital investigated in this study should be improved by incorporating the best practices of medication safety and preventing medication errors. High-alert medications may be used as triggers for improving the safety of the drug-utilization system. PMID:22012039
Nurses' role in medication safety.
Choo, Janet; Hutchinson, Alison; Bucknall, Tracey
2010-10-01
To explore the nurse's role in the process of medication management and identify the challenges associated with safe medication management in contemporary clinical practice. Medication errors have been a long-standing factor affecting consumer safety. The nursing profession has been identified as essential to the promotion of patient safety. A review of literature on medication errors and the use of electronic prescribing in medication errors. Medication management requires a multidisciplinary approach and interdisciplinary communication is essential to reduce medication errors. Information technologies can help to reduce some medication errors through eradication of transcription and dosing errors. Nurses must play a major role in the design of computerized medication systems to ensure a smooth transition to such as system. The nurses' roles in medication management cannot be over-emphasized. This is particularly true when designing a computerized medication system. The adoption of safety measures during decision making that parallel those of the aviation industry safety procedures can provide some strategies to prevent medication error. Innovations in information technology offer potential mechanisms to avert adverse events in medication management for nurses. © 2010 The Authors. Journal compilation © 2010 Blackwell Publishing Ltd.
Sources of medical error in refractive surgery.
Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B
2013-05-01
To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.
Bartram, Jack; Mountjoy, Edward; Brooks, Tony; Hancock, Jeremy; Williamson, Helen; Wright, Gary; Moppett, John; Goulden, Nick; Hubank, Mike
2016-07-01
High-throughput sequencing (HTS) (next-generation sequencing) of the rearranged Ig and T-cell receptor genes promises to be less expensive and more sensitive than current methods of monitoring minimal residual disease (MRD) in patients with acute lymphoblastic leukemia. However, the adoption of new approaches by clinical laboratories requires careful evaluation of all potential sources of error and the development of strategies to ensure the highest accuracy. Timely and efficient clinical use of HTS platforms will depend on combining multiple samples (multiplexing) in each sequencing run. Here we examine HTS of the Ig heavy-chain gene on the Illumina MiSeq platform for MRD detection. We identify errors associated with multiplexing that could potentially impact the accuracy of MRD analysis. We optimize a strategy that combines high-purity, sequence-optimized oligonucleotides, dual indexing, and an error-aware demultiplexing approach to minimize errors and maximize sensitivity. We present a probability-based demultiplexing pipeline, Error-Aware Demultiplexer, that is suitable for all MiSeq strategies and accurately assigns samples to the correct identifier without excessive loss of data. Finally, using controls quantified by digital PCR, we show that HTS-MRD can accurately detect as few as 1 in 10(6) copies of specific leukemic MRD. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
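As a simplified, distance-based illustration of error-aware demultiplexing (the published Error-Aware Demultiplexer is probability-based and more sophisticated), the sketch below assigns dual-indexed reads to samples only when the closest barcode pair is unambiguous and within a mismatch tolerance; the sample names and index sequences are hypothetical.

```python
# Simplified sketch: tolerant assignment of dual-indexed reads to samples by
# Hamming distance, discarding ambiguous or badly mismatched index pairs.
from typing import Optional

SAMPLES = {                       # hypothetical (i7, i5) index pair per sample
    "patient_01": ("ATCACGAT", "CGATGTTT"),
    "patient_02": ("TTAGGCAT", "TGACCACT"),
}
MAX_MISMATCHES = 1                # per index; stricter thresholds trade yield for accuracy

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def assign(i7: str, i5: str) -> Optional[str]:
    """Return the sample name, or None if there is no unambiguous match within tolerance."""
    scored = sorted(
        (hamming(i7, s7) + hamming(i5, s5), name)
        for name, (s7, s5) in SAMPLES.items()
    )
    best_score, best_name = scored[0]
    if best_score > 2 * MAX_MISMATCHES:
        return None                               # too many index errors
    if len(scored) > 1 and scored[1][0] == best_score:
        return None                               # ambiguous between two samples
    return best_name

print(assign("ATCACGAT", "CGATGTTA"))   # one mismatch in i5 -> "patient_01"
print(assign("GGGGGGGG", "GGGGGGGG"))   # unassignable -> None
```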
Epinephrine Auto-Injector Versus Drawn Up Epinephrine for Anaphylaxis Management: A Scoping Review.
Chime, Nnenna O; Riese, Victoria G; Scherzer, Daniel J; Perretta, Julianne S; McNamara, LeAnn; Rosen, Michael A; Hunt, Elizabeth A
2017-08-01
Anaphylaxis is a life-threatening event. Most clinical symptoms of anaphylaxis can be reversed by prompt intramuscular administration of epinephrine using an auto-injector or epinephrine drawn up in a syringe; delays and errors may be fatal. The aim of this scoping review is to identify and compare errors associated with use of epinephrine drawn up in a syringe versus epinephrine auto-injectors in order to assist hospitals as they choose which approach minimizes the risk of adverse events for their patients. PubMed, Embase, CINAHL, Web of Science, and the Cochrane Library were searched using terms agreed a priori. We reviewed human and simulation studies reporting errors associated with the use of epinephrine in anaphylaxis. There were multiple screening stages with evolving feedback. Each study was independently assessed by two reviewers for eligibility. Data were extracted using an instrument modeled on that of Zaza et al and grouped into themes. Three main themes were noted: 1) ergonomics, 2) dosing errors, and 3) errors due to route of administration. Significant knowledge gaps in the operation of epinephrine auto-injectors among healthcare providers, patients, and caregivers were identified. For epinephrine in a syringe, there were more frequent reports of incorrect dosing and erroneous IV administration with associated adverse cardiac events. For the epinephrine auto-injector, unintentional administration to the digit was an error reported on multiple occasions. This scoping review highlights knowledge gaps and a diverse set of errors regardless of the approach to epinephrine preparation during management of anaphylaxis. There are more potentially life-threatening errors reported for epinephrine drawn up in a syringe than with the auto-injectors. The impact of these knowledge gaps and potentially fatal errors on patient outcomes, cost, and quality of care is worthy of further investigation.
Tedja, Milly S; Wojciechowski, Robert; Hysi, Pirro G; Eriksson, Nicholas; Furlotte, Nicholas A; Verhoeven, Virginie J M; Iglesias, Adriana I; Meester-Smoor, Magda A; Tompson, Stuart W; Fan, Qiao; Khawaja, Anthony P; Cheng, Ching-Yu; Höhn, René; Yamashiro, Kenji; Wenocur, Adam; Grazal, Clare; Haller, Toomas; Metspalu, Andres; Wedenoja, Juho; Jonas, Jost B; Wang, Ya Xing; Xie, Jing; Mitchell, Paul; Foster, Paul J; Klein, Barbara E K; Klein, Ronald; Paterson, Andrew D; Hosseini, S Mohsen; Shah, Rupal L; Williams, Cathy; Teo, Yik Ying; Tham, Yih Chung; Gupta, Preeti; Zhao, Wanting; Shi, Yuan; Saw, Woei-Yuh; Tai, E-Shyong; Sim, Xue Ling; Huffman, Jennifer E; Polašek, Ozren; Hayward, Caroline; Bencic, Goran; Rudan, Igor; Wilson, James F; Joshi, Peter K; Tsujikawa, Akitaka; Matsuda, Fumihiko; Whisenhunt, Kristina N; Zeller, Tanja; van der Spek, Peter J; Haak, Roxanna; Meijers-Heijboer, Hanne; van Leeuwen, Elisabeth M; Iyengar, Sudha K; Lass, Jonathan H; Hofman, Albert; Rivadeneira, Fernando; Uitterlinden, André G; Vingerling, Johannes R; Lehtimäki, Terho; Raitakari, Olli T; Biino, Ginevra; Concas, Maria Pina; Schwantes-An, Tae-Hwi; Igo, Robert P; Cuellar-Partida, Gabriel; Martin, Nicholas G; Craig, Jamie E; Gharahkhani, Puya; Williams, Katie M; Nag, Abhishek; Rahi, Jugnoo S; Cumberland, Phillippa M; Delcourt, Cécile; Bellenguez, Céline; Ried, Janina S; Bergen, Arthur A; Meitinger, Thomas; Gieger, Christian; Wong, Tien Yin; Hewitt, Alex W; Mackey, David A; Simpson, Claire L; Pfeiffer, Norbert; Pärssinen, Olavi; Baird, Paul N; Vitart, Veronique; Amin, Najaf; van Duijn, Cornelia M; Bailey-Wilson, Joan E; Young, Terri L; Saw, Seang-Mei; Stambolian, Dwight; MacGregor, Stuart; Guggenheim, Jeremy A; Tung, Joyce Y; Hammond, Christopher J; Klaver, Caroline C W
2018-06-01
Refractive errors, including myopia, are the most frequent eye disorders worldwide and an increasingly common cause of blindness. This genome-wide association meta-analysis in 160,420 participants and replication in 95,505 participants increased the number of established independent signals from 37 to 161 and showed high genetic correlation between Europeans and Asians (>0.78). Expression experiments and comprehensive in silico analyses identified retinal cell physiology and light processing as prominent mechanisms, and also identified functional contributions to refractive-error development in all cell types of the neurosensory retina, retinal pigment epithelium, vascular endothelium and extracellular matrix. Newly identified genes implicate novel mechanisms such as rod-and-cone bipolar synaptic neurotransmission, anterior-segment morphology and angiogenesis. Thirty-one loci resided in or near regions transcribing small RNAs, thus suggesting a role for post-transcriptional regulation. Our results support the notion that refractive errors are caused by a light-dependent retina-to-sclera signaling cascade and delineate potential pathobiological molecular drivers.
The incidence and severity of errors in pharmacist-written discharge medication orders.
Onatade, Raliat; Sawieres, Sara; Veck, Alexandra; Smith, Lindsay; Gore, Shivani; Al-Azeib, Sumiah
2017-08-01
Background Errors in discharge prescriptions are problematic. When hospital pharmacists write discharge prescriptions, improvements are seen in the quality and efficiency of discharge. There is limited information on the incidence of errors in pharmacists' medication orders. Objective To investigate the extent and clinical significance of errors in pharmacist-written discharge medication orders. Setting 1000-bed teaching hospital in London, UK. Method Pharmacists in this London hospital routinely write discharge medication orders as part of the clinical pharmacy service. Convenient days, based on researcher availability, between October 2013 and January 2014 were selected. Pre-registration pharmacists reviewed all discharge medication orders written by pharmacists on these days and identified discrepancies between the medication history, inpatient chart, patient records and discharge summary. A senior clinical pharmacist confirmed the presence of an error. Each error was assigned a potential clinical significance rating (based on the NCCMERP scale) by a physician and an independent senior clinical pharmacist, working separately. Main outcome measure Incidence of errors in pharmacist-written discharge medication orders. Results 509 prescriptions, written by 51 pharmacists and containing 4258 discharge medication orders, were assessed (8.4 orders per prescription). Ten prescriptions (2%) contained a total of ten erroneous orders (order error rate 0.2%). The pharmacist considered that one error had the potential to cause temporary harm (0.02% of all orders). The physician did not rate any of the errors as having the potential to cause harm. Conclusion The incidence of errors in pharmacists' discharge medication orders was low. The quality, safety and policy implications of pharmacists routinely writing discharge medication orders should be further explored.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric
Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface, (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) of these events would be submittable to a NRS of which the majority was related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.
Refractive errors and schizophrenia.
Caspi, Asaf; Vishne, Tali; Reichenberg, Abraham; Weiser, Mark; Dishon, Ayelet; Lubin, Gadi; Shmushkevitz, Motti; Mandel, Yossi; Noy, Shlomo; Davidson, Michael
2009-02-01
Refractive errors (myopia, hyperopia and amblyopia), like schizophrenia, have a strong genetic cause, and dopamine has been proposed as a potential mediator in their pathophysiology. The present study explored the association between refractive errors in adolescence and schizophrenia, and the potential familiality of this association. The Israeli Draft Board carries a mandatory standardized visual accuracy assessment. 678,674 males consecutively assessed by the Draft Board and found to be psychiatrically healthy at age 17 were followed for psychiatric hospitalization with schizophrenia using the Israeli National Psychiatric Hospitalization Case Registry. Sib-ships were also identified within the cohort. There was a negative association between refractive errors and later hospitalization for schizophrenia. Future male schizophrenia patients were two times less likely to have refractive errors compared with never-hospitalized individuals, controlling for intelligence, years of education and socioeconomic status [adjusted Hazard Ratio=.55; 95% confidence interval .35-.85]. The non-schizophrenic male siblings of schizophrenia patients also had lower prevalence of refractive errors compared to never-hospitalized individuals. Presence of refractive errors in adolescence is related to lower risk for schizophrenia. The familiality of this association suggests that refractive errors may be associated with the genetic liability to schizophrenia.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, W.T.; Siebers, J.V.; Bzdusek, K.
Purpose: To introduce methods to analyze Deformable Image Registration (DIR) and identify regions of potential DIR errors. Methods: DIR Deformable Vector Fields (DVFs) quantifying patient anatomic changes were evaluated using the Jacobian determinant and the magnitude of DVF curl as functions of tissue density and tissue type. These quantities represent local relative deformation and rotation, respectively. Large values in dense tissues can potentially identify non-physical DVF errors. For multiple DVFs per patient, histograms and visualization of DVF differences were also considered. To demonstrate the capabilities of these methods, we computed multiple DVFs for each of five Head and Neck (H&N) patients (P1–P5) via a Fast-symmetric Demons (FSD) algorithm and via a Diffeomorphic Demons (DFD) algorithm, and show the potential to identify DVF errors. Results: Quantitative comparisons of the FSD and DFD registrations revealed <0.3 cm DVF differences in >99% of all voxels for P1, >96% for P2, and >90% of voxels for P3. While the FSD and DFD registrations were very similar for these patients, the Jacobian determinant was >50% in 9–15% of soft tissue and in 3–17% of bony tissue in each of these cases. The volumes of large soft tissue deformation were consistent for all five patients using the FSD algorithm (mean 15%±4% volume), whereas DFD reduced regions of large deformation by 10% volume (785 cm³) for P4 and by 14% volume (1775 cm³) for P5. The DFD registrations resulted in fewer regions of large DVF-curl; 50% rotations in FSD registrations averaged 209±136 cm³ in soft tissue and 10±11 cm³ in bony tissue, but using DFD these values were reduced to 42±53 cm³ and 1.1±1.5 cm³, respectively. Conclusion: Analysis of Jacobian determinant and curl as functions of tissue density can identify regions of potential DVF errors by identifying non-physical deformations and rotations. Collaboration with Phillips Healthcare, as indicated in authorship.
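To make the two screening quantities concrete, the sketch below computes a voxel-wise Jacobian determinant and curl magnitude from a displacement field stored as a (3, nx, ny, nz) array. The array layout and function are assumptions for illustration, not the authors' implementation.

```python
# Sketch (assumed array layout): Jacobian determinant and curl magnitude of a
# deformation vector field, the quantities used above to flag suspect DIR regions.
import numpy as np

def jacobian_det_and_curl(dvf, spacing=(1.0, 1.0, 1.0)):
    """dvf has shape (3, nx, ny, nz): displacement components ux, uy, uz."""
    grads = [np.gradient(dvf[i], *spacing) for i in range(3)]   # grads[i][j] = d u_i / d x_j

    # Jacobian of the mapping x -> x + u(x) is I + du/dx at every voxel.
    jac = np.zeros(dvf.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    jac_det = np.linalg.det(jac)                  # values <= 0 indicate folding (non-physical)

    # Curl of the displacement field, then its magnitude per voxel.
    curl = np.stack([grads[2][1] - grads[1][2],
                     grads[0][2] - grads[2][0],
                     grads[1][0] - grads[0][1]])
    return jac_det, np.linalg.norm(curl, axis=0)
```

Voxels where the determinant deviates strongly from 1 (or turns negative) and voxels with large curl magnitude can then be masked by tissue density or tissue type, in the spirit of the analysis above, to highlight deformations and rotations that are unlikely to be physical in dense tissue.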
For patients with difficult-to-treat cancers, doctors increasingly rely on genomic testing of tumors to identify errors in the DNA that indicate a tumor can be targeted by existing therapies. But this approach overlooks another potential marker — rogue proteins — that may be driving cancer cells and also could be targeted with existing treatments.
Evaluation of Parenteral Nutrition Errors in an Era of Drug Shortages.
Storey, Michael A; Weber, Robert J; Besco, Kelly; Beatty, Stuart; Aizawa, Kumiko; Mirtallo, Jay M
2016-04-01
Ingredient shortages have forced many organizations to change practices or use unfamiliar ingredients, which creates potential for error. Parenteral nutrition (PN) has been significantly affected, as every ingredient in PN has been impacted in recent years. Ingredient errors involving PN that were reported to the national anonymous MedMARx database between May 2009 and April 2011 were reviewed. Errors were categorized by ingredient, node, and severity. Categorization was validated by experts in medication safety and PN. A timeline of PN ingredient shortages was developed and compared with the PN errors to determine if events correlated with an ingredient shortage. This information was used to determine the prevalence and change in harmful PN errors during periods of shortage, elucidating whether a statistically significant difference exists in errors during shortage as compared with a control period (ie, no shortage). There were 1311 errors identified. Nineteen errors were associated with harm. Fat emulsions and electrolytes were the PN ingredients most frequently associated with error. Insulin was the ingredient most often associated with patient harm. On individual error review, PN shortages were described in 13 errors, most of which were associated with intravenous fat emulsions; none were associated with harm. There was no correlation of drug shortages with the frequency of PN errors. Despite the significant impact that shortages have had on the PN use system, no adverse impact on patient safety could be identified from these reported PN errors. © 2015 American Society for Parenteral and Enteral Nutrition.
Minimizing treatment planning errors in proton therapy using failure mode and effects analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com; Johnson, Randall; Larson, Gary
Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.
Minimizing treatment planning errors in proton therapy using failure mode and effects analysis.
Zheng, Yuanshui; Johnson, Randall; Larson, Gary
2016-06-01
Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.
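As context for the risk priority number used above: in FMEA the RPN is conventionally the product of the occurrence, severity, and detectability ratings, each typically scored on a 1-10 scale. A minimal sketch of that calculation follows; the failure modes and ratings are hypothetical illustrations, not values from this study.

    # Hypothetical treatment-planning failure modes with (occurrence, severity, detectability)
    # ratings on a 1-10 scale; a higher detectability score means the failure is harder to detect.
    failure_modes = {
        "incorrect image fusion": (3, 8, 6),
        "plan exported to wrong patient record": (1, 10, 3),
        "dose calculation grid too coarse": (4, 6, 5),
    }

    def rpn(occurrence, severity, detectability):
        return occurrence * severity * detectability

    # Rank failure modes by RPN to prioritise quality-management effort.
    for mode, ratings in sorted(failure_modes.items(), key=lambda kv: -rpn(*kv[1])):
        print(f"{mode}: RPN = {rpn(*ratings)}")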
Understanding human management of automation errors
McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.
2013-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042
Understanding human management of automation errors.
McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D
2014-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.
Danielson, Patrick; Yang, Limin; Jin, Suming; Homer, Collin G.; Napton, Darrell
2016-01-01
We developed a method that analyzes the quality of the cultivated cropland class mapped in the USA National Land Cover Database (NLCD) 2006. The method integrates multiple geospatial datasets and a Multi Index Integrated Change Analysis (MIICA) change detection method that captures spectral changes to identify the spatial distribution and magnitude of potential commission and omission errors for the cultivated cropland class in NLCD 2006. The majority of the commission and omission errors in NLCD 2006 are in areas where cultivated cropland is not the most dominant land cover type. The errors are primarily attributed to the less accurate training dataset derived from the National Agricultural Statistics Service Cropland Data Layer dataset. In contrast, error rates are low in areas where cultivated cropland is the dominant land cover. Agreement between model-identified commission errors and independently interpreted reference data was high (79%). Agreement was low (40%) for omission error comparison. The majority of the commission errors in the NLCD 2006 cultivated crops were confused with low-intensity developed classes, while the majority of omission errors were from herbaceous and shrub classes. Some errors were caused by inaccurate land cover change from misclassification in NLCD 2001 and the subsequent land cover post-classification process.
Errors in veterinary practice: preliminary lessons for building better veterinary teams.
Kinnison, T; Guile, D; May, S A
2015-11-14
Case studies in two typical UK veterinary practices were undertaken to explore teamwork, including interprofessional working. Each study involved one week of whole team observation based on practice locations (reception, operating theatre), one week of shadowing six focus individuals (veterinary surgeons, veterinary nurses and administrators) and a final week consisting of semistructured interviews regarding teamwork. Errors emerged as a finding of the study. The definition of errors was inclusive, pertaining to inputs or omitted actions with potential adverse outcomes for patients, clients or the practice. The 40 identified instances could be grouped into clinical errors (dosing/drugs, surgical preparation, lack of follow-up), lost item errors, and most frequently, communication errors (records, procedures, missing face-to-face communication, mistakes within face-to-face communication). The qualitative nature of the study allowed the underlying cause of the errors to be explored. In addition to some individual mistakes, system faults were identified as a major cause of errors. Observed examples and interviews demonstrated several challenges to interprofessional teamworking which may cause errors, including: lack of time, part-time staff leading to frequent handovers, branch differences and individual veterinary surgeon work preferences. Lessons are drawn for building better veterinary teams and implications for Disciplinary Proceedings considered. British Veterinary Association.
Schnock, Kumiko O; Biggs, Bonnie; Fladger, Anne; Bates, David W; Rozenblum, Ronen
2017-02-22
Retained surgical instruments (RSI) are one of the most serious preventable complications in operating room settings, potentially leading to profound adverse effects for patients, as well as costly legal and financial consequences for hospitals. Safety measures to eliminate RSIs have been widely adopted in the United States and abroad, but despite widespread efforts, medical errors with RSI have not been eliminated. Through a systematic review of recent studies, we aimed to identify the impact of radio frequency identification (RFID) technology on reducing RSI errors and improving patient safety. A literature search on the effects of RFID technology on RSI error reduction was conducted in PubMed and CINAHL (2000-2016). Relevant articles were selected and reviewed by 4 researchers. After the literature search, 385 articles were identified and the full texts of the 88 articles were assessed for eligibility. Of these, 5 articles were included to evaluate the benefits and drawbacks of using RFID for preventing RSI-related errors. The use of RFID resulted in rapid detection of RSI through body tissue with high accuracy rates, reducing risk of counting errors and improving workflow. Based on the existing literature, RFID technology seems to have the potential to substantially improve patient safety by reducing RSI errors, although the body of evidence is currently limited. Better designed research studies are needed to get a clear understanding of this domain and to find new opportunities to use this technology and improve patient safety.
Measuring diagnoses: ICD code accuracy.
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-10-01
To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.
Evaluation of drug administration errors in a teaching hospital
2012-01-01
Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationship between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time error of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in a drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patient under the nurse's care. Conclusion Medication administration errors are frequent. The identification of its determinants helps to undertake designed interventions. PMID:22409837
Evaluation of drug administration errors in a teaching hospital.
Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre
2012-03-12
Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Prospective study based on disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were number, type and clinical importance of errors and associated risk factors. Drug administration error rate was calculated with and without wrong time errors. Relationship between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations (430 errors) with one or more errors were detected (27.6%). There were 312 wrong time errors, ten simultaneously with another type of error, resulting in an error rate without wrong time error of 7.5% (113/1501). The most frequently administered drugs were the cardiovascular drugs (425/1501, 28.3%). The highest risks of error in a drug administration were for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patient under the nurse's care. Medication administration errors are frequent. The identification of its determinants helps to undertake designed interventions.
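The two error rates reported above follow directly from the counts given; a small sketch of the arithmetic, with all figures taken from the abstract.

    opportunities = 1501
    administrations_with_error = 415     # 430 errors in total
    wrong_time_errors = 312
    wrong_time_plus_other = 10           # wrong-time errors that co-occurred with another error type

    overall_rate = administrations_with_error / opportunities                 # ~27.6%
    # Administrations whose only error was a wrong-time error are excluded:
    without_wrong_time = administrations_with_error - (wrong_time_errors - wrong_time_plus_other)
    rate_without_wrong_time = without_wrong_time / opportunities              # 113/1501 ~ 7.5%
    print(f"{overall_rate:.1%}  {rate_without_wrong_time:.1%}")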
Development and Assessment of a Medication Safety Measurement Program in a Long-Term Care Pharmacy.
Hertig, John B; Hultgren, Kyle E; Parks, Scott; Rondinelli, Rick
2016-02-01
Medication errors continue to be a major issue in the health care system, including in long-term care facilities. While many hospitals and health systems have developed methods to identify, track, and prevent these errors, long-term care facilities historically have not invested in these error-prevention strategies. The objective of this study was two-fold: 1) to develop a set of medication-safety process measures for dispensing in a long-term care pharmacy, and 2) to analyze the data from those measures to determine the relative safety of the process. The study was conducted at In Touch Pharmaceuticals in Valparaiso, Indiana. To assess the safety of the medication-use system, each step was documented using a comprehensive flowchart (process flow map) tool. Once completed and validated, the flowchart was used to complete a "failure modes and effects analysis" (FMEA) identifying ways a process may fail. Operational gaps found during FMEA were used to identify points of measurement. The research identified a set of eight measures as potential areas of failure; data were then collected on each one of these. More than 133,000 medication doses (opportunities for errors) were included in the study during the research time frame (April 1 to June 4, 2014). Overall, there was an approximate order-entry error rate of 15.26%, with intravenous errors at 0.37%. A total of 21 errors migrated through the entire medication-use system. These 21 errors in 133,000 opportunities resulted in a final check error rate of 0.015%. A comprehensive medication-safety measurement program was designed and assessed. This study demonstrated the ability to detect medication errors in a long-term pharmacy setting, thereby making process improvements measurable. Future, larger, multi-site studies should be completed to test this measurement program.
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
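One widely used form of the adjustment for differential allelic amplification (a plausible sketch, not necessarily the authors' exact formulation) rescales one allele's signal by a correction factor k estimated from individually genotyped heterozygotes, whose true allele ratio is 1:1. All data below are hypothetical.

    def correction_factor(heterozygote_peaks):
        # k = mean ratio of allele-A to allele-B signal in known heterozygotes (true ratio 1:1).
        return sum(a / b for a, b in heterozygote_peaks) / len(heterozygote_peaks)

    def pooled_allele_frequency(signal_a, signal_b, k):
        # Estimated frequency of allele A in the pooled sample, with A's signal deflated by k.
        return (signal_a / k) / (signal_a / k + signal_b)

    k = correction_factor([(1.15, 1.0), (1.22, 1.0), (1.18, 1.0)])
    print(pooled_allele_frequency(signal_a=620.0, signal_b=410.0, k=k))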
Ni, Yizhao; Lingren, Todd; Hall, Eric S; Leonard, Matthew; Melton, Kristin; Kirkendall, Eric S
2018-05-01
Timely identification of medication administration errors (MAEs) promises great benefits for mitigating medication errors and associated harm. Despite previous efforts utilizing computerized methods to monitor medication errors, sustaining effective and accurate detection of MAEs remains challenging. In this study, we developed a real-time MAE detection system and evaluated its performance prior to system integration into institutional workflows. Our prospective observational study included automated MAE detection of 10 high-risk medications and fluids for patients admitted to the neonatal intensive care unit at Cincinnati Children's Hospital Medical Center during a 4-month period. The automated system extracted real-time medication use information from the institutional electronic health records and identified MAEs using logic-based rules and natural language processing techniques. The MAE summary was delivered via a real-time messaging platform to promote reduction of patient exposure to potential harm. System performance was validated using a physician-generated gold standard of MAE events, and results were compared with those of current practice (incident reporting and trigger tools). Physicians identified 116 MAEs from 10 104 medication administrations during the study period. Compared to current practice, the sensitivity with automated MAE detection was improved significantly from 4.3% to 85.3% (P = .009), with a positive predictive value of 78.0%. Furthermore, the system showed potential to reduce patient exposure to harm, from 256 min to 35 min (P < .001). The automated system demonstrated improved capacity for identifying MAEs while guarding against alert fatigue. It also showed promise for reducing patient exposure to potential harm following MAE events.
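The performance figures above use the standard definitions of sensitivity and positive predictive value; the counts in this sketch are illustrative values back-calculated to be roughly consistent with the reported percentages, not the study's raw data.

    def sensitivity(tp, fn):
        return tp / (tp + fn)

    def positive_predictive_value(tp, fp):
        return tp / (tp + fp)

    # Roughly consistent with 116 physician-identified MAEs, ~85.3% sensitivity, ~78.0% PPV.
    tp, fn, fp = 99, 17, 28
    print(f"sensitivity = {sensitivity(tp, fn):.1%}, PPV = {positive_predictive_value(tp, fp):.1%}")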
A Quality Improvement Project to Decrease Human Milk Errors in the NICU.
Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E
2017-02-01
Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.
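Rates expressed per 1000 bottles, as above, are often monitored on a u-chart when tracking improvement over time; a minimal sketch of the usual three-sigma control limits is shown below, using hypothetical monthly counts rather than the study's data (the specific chart type used in the study is not stated here).

    import math

    # Hypothetical monthly counts of (scanned errors, bottles scanned).
    months = [(310, 3200), (280, 3100), (240, 3300), (150, 3000)]

    u_bar = sum(e for e, _ in months) / sum(b for _, b in months)  # centre line, errors per bottle

    for errors, bottles in months:
        u = errors / bottles
        sigma = math.sqrt(u_bar / bottles)
        ucl, lcl = u_bar + 3 * sigma, max(u_bar - 3 * sigma, 0.0)
        print(f"{u*1000:5.1f} per 1000 bottles, control limits [{lcl*1000:.1f}, {ucl*1000:.1f}]")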
Foster, J D; Miskovic, D; Allison, A S; Conti, J A; Ockrim, J; Cooper, E J; Hanna, G B; Francis, N K
2016-06-01
Laparoscopic rectal resection is technically challenging, with outcomes dependent upon technical performance. No robust objective assessment tool exists for laparoscopic rectal resection surgery. This study aimed to investigate the application of the objective clinical human reliability analysis (OCHRA) technique for assessing technical performance of laparoscopic rectal surgery and explore the validity and reliability of this technique. Laparoscopic rectal cancer resection operations were described in the format of a hierarchical task analysis. Potential technical errors were defined. The OCHRA technique was used to identify technical errors enacted in videos of twenty consecutive laparoscopic rectal cancer resection operations from a single site. The procedural task, spatial location, and circumstances of all identified errors were logged. Clinical validity was assessed through correlation with clinical outcomes; reliability was assessed by test-retest. A total of 335 execution errors were identified, with a median of 15 per operation. More errors were observed during pelvic tasks compared with abdominal tasks (p < 0.001). Within the pelvis, more errors were observed during dissection on the right side than the left (p = 0.03). Test-retest confirmed reliability (r = 0.97, p < 0.001). A significant correlation was observed between error frequency and mesorectal specimen quality (rs = 0.52, p = 0.02) and with blood loss (rs = 0.609, p = 0.004). OCHRA offers a valid and reliable method for evaluating technical performance of laparoscopic rectal surgery.
Missed opportunities for diagnosis: lessons learned from diagnostic errors in primary care.
Goyder, Clare R; Jones, Caroline H D; Heneghan, Carl J; Thompson, Matthew J
2015-12-01
Because of the difficulties inherent in diagnosis in primary care, it is inevitable that diagnostic errors will occur. However, despite the important consequences associated with diagnostic errors and their estimated high prevalence, teaching and research on diagnostic error is a neglected area. To ascertain the key learning points from GPs' experiences of diagnostic errors and approaches to clinical decision making associated with these. Secondary analysis of 36 qualitative interviews with GPs in Oxfordshire, UK. Two datasets of semi-structured interviews were combined. Questions focused on GPs' experiences of diagnosis and diagnostic errors (or near misses) in routine primary care and out of hours. Interviews were audiorecorded, transcribed verbatim, and analysed thematically. Learning points include GPs' reliance on 'pattern recognition' and the failure of this strategy to identify atypical presentations; the importance of considering all potentially serious conditions using a 'restricted rule out' approach; and identifying and acting on a sense of unease. Strategies to help manage uncertainty in primary care were also discussed. Learning from previous examples of diagnostic errors is essential if these events are to be reduced in the future and this should be incorporated into GP training. At a practice level, learning points from experiences of diagnostic errors should be discussed more frequently; and more should be done to integrate these lessons nationally to understand and characterise diagnostic errors. © British Journal of General Practice 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied on two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA’s capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
Using failure mode and effects analysis to plan implementation of smart i.v. pump technology.
Wetterneck, Tosha B; Skibinski, Kathleen A; Roberts, Tanita L; Kleppin, Susan M; Schroeder, Mark E; Enloe, Myra; Rough, Steven S; Hundt, Ann Schoofs; Carayon, Pascale
2006-08-15
Failure mode and effects analysis (FMEA) was used to evaluate a smart i.v. pump as it was implemented into a redesigned medication-use process. A multidisciplinary team conducted a FMEA to guide the implementation of a smart i.v. pump that was designed to prevent pump programming errors. The smart i.v. pump was equipped with a dose-error reduction system that included a pre-defined drug library in which dosage limits were set for each medication. Monitoring for potential failures and errors occurred for three months postimplementation of FMEA. Specific measures were used to determine the success of the actions that were implemented as a result of the FMEA. The FMEA process at the hospital identified key failure modes in the medication process with the use of the old and new pumps, and actions were taken to avoid errors and adverse events. I.V. pump software and hardware design changes were also recommended. Thirteen of the 18 failure modes reported in practice after pump implementation had been identified by the team. A beneficial outcome of FMEA was the development of a multidisciplinary team that provided the infrastructure for safe technology implementation and effective event investigation after implementation. With the continual updating of i.v. pump software and hardware after implementation, FMEA can be an important starting place for safe technology choice and implementation and can produce site experts to follow technology and process changes over time. FMEA was useful in identifying potential problems in the medication-use process with the implementation of new smart i.v. pumps. Monitoring for system failures and errors after implementation remains necessary.
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
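For reference, SPAR-H commonly starts from a nominal human error probability (often 0.01 for diagnosis tasks and 0.001 for action tasks) and scales it by the product of eight performance shaping factor multipliers, applying an adjustment when three or more factors are negative. The sketch below follows those standard assumptions; the multipliers shown are hypothetical, not the plant's values.

    def spar_h_hep(nominal_hep, psf_multipliers, negative_psf_count):
        composite = 1.0
        for multiplier in psf_multipliers:
            composite *= multiplier
        if negative_psf_count >= 3:
            # Adjustment commonly used when three or more PSFs are negative, keeping HEP <= 1.
            return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
        return min(nominal_hep * composite, 1.0)

    # Hypothetical action task with barely adequate time (x10) and extreme stress (x5).
    print(spar_h_hep(nominal_hep=0.001, psf_multipliers=[10, 5, 1, 1, 1, 1, 1, 1], negative_psf_count=2))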
Measuring Diagnoses: ICD Code Accuracy
O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M
2005-01-01
Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principle Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999
Incidence of speech recognition errors in the emergency department.
Goss, Foster R; Zhou, Li; Weiner, Scott G
2016-09-01
Physician use of computerized speech recognition (SR) technology has risen in recent years due to its ease of use and efficiency at the point of care. However, error rates between 10 and 23% have been observed, raising concern about the number of errors being entered into the permanent medical record, their impact on quality of care and medical liability that may arise. Our aim was to determine the incidence and types of SR errors introduced by this technology in the emergency department (ED). Level 1 emergency department with 42,000 visits/year in a tertiary academic teaching hospital. A random sample of 100 notes dictated by attending emergency physicians (EPs) using SR software was collected from the ED electronic health record between January and June 2012. Two board-certified EPs annotated the notes and conducted error analysis independently. An existing classification schema was adopted to classify errors into eight errors types. Critical errors deemed to potentially impact patient care were identified. There were 128 errors in total or 1.3 errors per note, and 14.8% (n=19) errors were judged to be critical. 71% of notes contained errors, and 15% contained one or more critical errors. Annunciation errors were the highest at 53.9% (n=69), followed by deletions at 18.0% (n=23) and added words at 11.7% (n=15). Nonsense errors, homonyms and spelling errors were present in 10.9% (n=14), 4.7% (n=6), and 0.8% (n=1) of notes, respectively. There were no suffix or dictionary errors. Inter-annotator agreement was 97.8%. This is the first estimate at classifying speech recognition errors in dictated emergency department notes. Speech recognition errors occur commonly with annunciation errors being the most frequent. Error rates were comparable if not lower than previous studies. 15% of errors were deemed critical, potentially leading to miscommunication that could affect patient care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
General linear codes for fault-tolerant matrix operations on processor arrays
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Abraham, J. A.
1988-01-01
Various checksum codes have been suggested for fault-tolerant matrix computations on processor arrays. Use of these codes is limited due to potential roundoff and overflow errors. Numerical errors may also be misconstrued as errors due to physical faults in the system. In this paper, a set of linear codes is identified which can be used for fault-tolerant matrix operations such as matrix addition, multiplication, transposition, and LU-decomposition, with minimum numerical error. Encoding schemes are given for some of the example codes which fall under the general set of codes. With the help of experiments, a rule of thumb for the selection of a particular code for a given application is derived.
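The checksum idea that these codes generalise can be illustrated with the classic row/column checksum scheme for matrix multiplication; the following is a sketch of that basic technique, not the specific linear codes identified in the paper.

    import numpy as np

    def column_checksum(a):
        # Append a row of column sums (encoding of the left operand).
        return np.vstack([a, a.sum(axis=0, keepdims=True)])

    def row_checksum(b):
        # Append a column of row sums (encoding of the right operand).
        return np.hstack([b, b.sum(axis=1, keepdims=True)])

    a = np.array([[1.0, 2.0], [3.0, 4.0]])
    b = np.array([[5.0, 6.0], [7.0, 8.0]])

    c_full = column_checksum(a) @ row_checksum(b)  # full-checksum encoding of a @ b
    c = c_full[:-1, :-1]

    # A fault is flagged when a recomputed checksum disagrees with the stored one; the
    # tolerance absorbs roundoff, one of the numerical issues the paper addresses.
    ok = np.allclose(c.sum(axis=1), c_full[:-1, -1]) and np.allclose(c.sum(axis=0), c_full[-1, :-1])
    print(ok)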
Miller, Marlene R; Robinson, Karen A; Lubomski, Lisa H; Rinke, Michael L; Pronovost, Peter J
2007-01-01
Background Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps are in our knowledge, and to what extent national medication error reduction strategies focus on children. Objective To synthesise peer reviewed knowledge on children's medication errors and on recommendations to improve paediatric medication safety by a systematic literature review. Data sources PubMed, Embase and Cinahl from 1 January 2000 to 30 April 2005, and 11 national entities that have disseminated recommendations to improve medication safety. Study selection Inclusion criteria were peer reviewed original data in English language. Studies that did not separately report paediatric data were excluded. Data extraction Two reviewers screened articles for eligibility and for data extraction, and screened all national medication error reduction strategies for relevance to children. Data synthesis From 358 articles identified, 31 were included for data extraction. The definition of medication error was non‐uniform across the studies. Dispensing and administering errors were the most poorly and non‐uniformly evaluated. Overall, the distributional epidemiological estimates of the relative percentages of paediatric error types were: prescribing 3–37%, dispensing 5–58%, administering 72–75%, and documentation 17–21%. 26 unique recommendations for strategies to reduce medication errors were identified; none were based on paediatric evidence. Conclusions Medication errors occur across the entire spectrum of prescribing, dispensing, and administering, are common, and have a myriad of non‐evidence based potential reduction strategies. Further research in this area needs a firmer standardisation for items such as dose ranges and definitions of medication errors, broader scope beyond inpatient prescribing errors, and prioritisation of implementation of medication error reduction strategies. PMID:17403758
Sozda, Christopher N.; Larson, Michael J.; Kaufman, David A.S.; Schmalfuss, Ilona M.; Perlstein, William M.
2011-01-01
Continuous monitoring of one’s performance is invaluable for guiding behavior towards successful goal attainment by identifying deficits and strategically adjusting responses when performance is inadequate. In the present study, we exploited the advantages of event-related functional magnetic resonance imaging (fMRI) to examine brain activity associated with error-related processing after severe traumatic brain injury (sTBI). fMRI and behavioral data were acquired while 10 sTBI participants and 12 neurologically-healthy controls performed a task-switching cued-Stroop task. fMRI data were analyzed using a random-effects whole-brain voxel-wise general linear model and planned linear contrasts. Behaviorally, sTBI patients showed greater error-rate interference than neurologically-normal controls. fMRI data revealed that, compared to controls, sTBI patients showed greater magnitude error-related activation in the anterior cingulate cortex (ACC) and an increase in the overall spatial extent of error-related activation across cortical and subcortical regions. Implications for future research and potential limitations in conducting fMRI research in neurologically-impaired populations are discussed, as well as some potential benefits of employing multimodal imaging (e.g., fMRI and event-related potentials) of cognitive control processes in TBI. PMID:21756946
Sozda, Christopher N; Larson, Michael J; Kaufman, David A S; Schmalfuss, Ilona M; Perlstein, William M
2011-10-01
Continuous monitoring of one's performance is invaluable for guiding behavior towards successful goal attainment by identifying deficits and strategically adjusting responses when performance is inadequate. In the present study, we exploited the advantages of event-related functional magnetic resonance imaging (fMRI) to examine brain activity associated with error-related processing after severe traumatic brain injury (sTBI). fMRI and behavioral data were acquired while 10 sTBI participants and 12 neurologically-healthy controls performed a task-switching cued-Stroop task. fMRI data were analyzed using a random-effects whole-brain voxel-wise general linear model and planned linear contrasts. Behaviorally, sTBI patients showed greater error-rate interference than neurologically-normal controls. fMRI data revealed that, compared to controls, sTBI patients showed greater magnitude error-related activation in the anterior cingulate cortex (ACC) and an increase in the overall spatial extent of error-related activation across cortical and subcortical regions. Implications for future research and potential limitations in conducting fMRI research in neurologically-impaired populations are discussed, as well as some potential benefits of employing multimodal imaging (e.g., fMRI and event-related potentials) of cognitive control processes in TBI. Copyright © 2011 Elsevier B.V. All rights reserved.
Yang, Hsuan-Chia; Iqbal, Usman; Nguyen, Phung Anh; Lin, Shen-Hsien; Huang, Chih-Wei; Jian, Wen-Shan; Li, Yu-Chuan
2016-04-01
Medication errors such as potentially inappropriate prescriptions can induce serious adverse drug events in patients. Information technology has the ability to prevent medication errors; however, the pharmacology of traditional Chinese medicine (TCM) is not as clear as in western medicine. The aim of this study was to apply the appropriateness of prescription (AOP) model to identify potential inappropriate TCM prescriptions. We used association rule mining techniques to analyze 14.5 million prescriptions from the Taiwan National Health Insurance Research Database. The disease and TCM (DTCM) and traditional Chinese medicine-traditional Chinese medicine (TCMM) associations are computed by their co-occurrence, and the associations' strength was measured as Q-values, which are often referred to as interestingness or lift values. By considering the number of Q-values, the AOP model was applied to identify the inappropriate prescriptions. Afterwards, three traditional Chinese physicians evaluated 1920 prescriptions and validated the detected outcomes from the AOP model. Out of 1920 prescriptions, the system showed a positive predictive value of 97.1% and a negative predictive value of 19.5% compared with the experts' assessments. The sensitivity analysis indicated that the negative predictive value could improve up to 27.5% when the model's threshold changed to 0.4. We successfully applied the AOP model to automatically identify potential inappropriate TCM prescriptions. This model could be a potential TCM clinical decision support system in order to improve drug safety and quality of care. Copyright © 2016 John Wiley & Sons, Ltd.
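The association strength described above is derived from co-occurrence counts; below is a minimal sketch of the lift measure commonly used as such an 'interestingness' value (the study's Q-value may be defined somewhat differently, and the counts shown are hypothetical).

    def lift(n_total, n_x, n_y, n_xy):
        # Observed co-occurrence of X and Y relative to what independence would predict.
        p_x, p_y, p_xy = n_x / n_total, n_y / n_total, n_xy / n_total
        return p_xy / (p_x * p_y)

    # Hypothetical counts: prescriptions with diagnosis X, with TCM formula Y, and with both.
    q = lift(n_total=1_000_000, n_x=40_000, n_y=25_000, n_xy=8_000)
    print(q)  # values near or below 1 may flag an unusual, potentially inappropriate pairing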
Threat and error management for anesthesiologists: a predictive risk taxonomy
Ruskin, Keith J.; Stiegler, Marjorie P.; Park, Kellie; Guffey, Patrick; Kurup, Viji; Chidester, Thomas
2015-01-01
Purpose of review Patient care in the operating room is a dynamic interaction that requires cooperation among team members and reliance upon sophisticated technology. Most human factors research in medicine has been focused on analyzing errors and implementing system-wide changes to prevent them from recurring. We describe a set of techniques that has been used successfully by the aviation industry to analyze errors and adverse events and explain how these techniques can be applied to patient care. Recent findings Threat and error management (TEM) describes adverse events in terms of risks or challenges that are present in an operational environment (threats) and the actions of specific personnel that potentiate or exacerbate those threats (errors). TEM is a technique widely used in aviation, and can be adapted for the use in a medical setting to predict high-risk situations and prevent errors in the perioperative period. A threat taxonomy is a novel way of classifying and predicting the hazards that can occur in the operating room. TEM can be used to identify error-producing situations, analyze adverse events, and design training scenarios. Summary TEM offers a multifaceted strategy for identifying hazards, reducing errors, and training physicians. A threat taxonomy may improve analysis of critical events with subsequent development of specific interventions, and may also serve as a framework for training programs in risk mitigation. PMID:24113268
Fault Injection Techniques and Tools
NASA Technical Reports Server (NTRS)
Hsueh, Mei-Chen; Tsai, Timothy K.; Iyer, Ravishankar K.
1997-01-01
Dependability evaluation involves the study of failures and errors. The destructive nature of a crash and long error latency make it difficult to identify the causes of failures in the operational environment. It is particularly hard to recreate a failure scenario for a large, complex system. To identify and understand potential failures, we use an experiment-based approach for studying the dependability of a system. Such an approach is applied not only during the conception and design phases, but also during the prototype and operational phases. To take an experiment-based approach, we must first understand a system's architecture, structure, and behavior. Specifically, we need to know its tolerance for faults and failures, including its built-in detection and recovery mechanisms, and we need specific instruments and tools to inject faults, create failures or errors, and monitor their effects.
[Responsibility due to medication errors in France: a study based on SHAM insurance data].
Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P
2015-03-01
Safe medication practice in hospitals is a major public health issue. The drug supply chain is a complex process and a potential source of errors and harm to patients. SHAM is the largest French provider of medical liability insurance and a relevant source of data on healthcare complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a court conviction. We conducted a retrospective study of SHAM insurance claims involving a medication error and leading to a conviction over a 6-year period (2005-2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. Consequences of claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were a drug monitoring error (11 cases), an administration error (5 cases), an overdose (6 cases), an allergy (4 cases), a contraindication (3 cases) and an omission (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court expert were errors related to service organization (11), medical practice (11) or nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
Detecting medication errors in the New Zealand pharmacovigilance database: a retrospective analysis.
Kunac, Desireé L; Tatley, Michael V
2011-01-01
Despite the traditional focus being adverse drug reactions (ADRs), pharmacovigilance centres have recently been identified as a potentially rich and important source of medication error data. To identify medication errors in the New Zealand Pharmacovigilance database (Centre for Adverse Reactions Monitoring [CARM]), and to describe the frequency and characteristics of these events. A retrospective analysis of the CARM pharmacovigilance database operated by the New Zealand Pharmacovigilance Centre was undertaken for the year 1 January-31 December 2007. All reports, excluding those relating to vaccines, clinical trials and pharmaceutical company reports, underwent a preventability assessment using predetermined criteria. Those events deemed preventable were subsequently classified to identify the degree of patient harm, type of error, stage of medication use process where the error occurred and origin of the error. A total of 1412 reports met the inclusion criteria and were reviewed, of which 4.3% (61/1412) were deemed preventable. Not all errors resulted in patient harm: 29.5% (18/61) were 'no harm' errors but 65.5% (40/61) of errors were deemed to have been associated with some degree of patient harm (preventable adverse drug events [ADEs]). For 5.0% (3/61) of events, the degree of patient harm was unable to be determined as the patient outcome was unknown. The majority of preventable ADEs (62.5% [25/40]) occurred in adults aged 65 years and older. The medication classes most involved in preventable ADEs were antibacterials for systemic use and anti-inflammatory agents, with gastrointestinal and respiratory system disorders the most common adverse events reported. For both preventable ADEs and 'no harm' events, most errors were incorrect dose and drug therapy monitoring problems consisting of failures in detection of significant drug interactions, past allergies or lack of necessary clinical monitoring. Preventable events were mostly related to the prescribing and administration stages of the medication use process, with the majority of errors 82.0% (50/61) deemed to have originated in the community setting. The CARM pharmacovigilance database includes medication errors, many of which were found to originate in the community setting and reported as ADRs. Error-prone situations were able to be identified, providing greater opportunity to improve patient safety. However, to enhance detection of medication errors by pharmacovigilance centres, reports should be prospectively reviewed for preventability and the reporting form revised to facilitate capture of important information that will provide meaningful insight into the nature of the underlying systems defects that caused the error.
Van de Vreede, Melita; McGrath, Anne; de Clifford, Jan
2018-05-14
Objective. The aim of the present study was to identify and quantify medication errors reportedly related to electronic medication management systems (eMMS) and those considered likely to occur more frequently with eMMS. This included developing a new classification system relevant to eMMS errors. Methods. Eight Victorian hospitals with eMMS participated in a retrospective audit of reported medication incidents from their incident reporting databases between May and July 2014. Site-appointed project officers submitted deidentified incidents they deemed new or likely to occur more frequently due to eMMS, together with the Incident Severity Rating (ISR). The authors reviewed and classified incidents. Results. There were 5826 medication-related incidents reported. In total, 93 (47 prescribing errors, 46 administration errors) were identified as new or potentially related to eMMS. Only one ISR2 (moderate) and no ISR1 (severe or death) errors were reported, so harm to patients in this 3-month period was minimal. The most commonly reported error types were 'human factors' and 'unfamiliarity or training' (70%) and 'cross-encounter or hybrid system errors' (22%). Conclusions. Although the results suggest that the errors reported were of low severity, organisations must remain vigilant to the risk of new errors and avoid the assumption that eMMS is the panacea to all medication error issues. What is known about the topic? eMMS have been shown to reduce some types of medication errors, but it has been reported that some new medication errors have been identified and some are likely to occur more frequently with eMMS. There are few published Australian studies that have reported on medication error types that are likely to occur more frequently with eMMS in more than one organisation and that include administration and prescribing errors. What does this paper add? This paper includes a new simple classification system for eMMS that is useful and outlines the most commonly reported incident types and can inform organisations and vendors on possible eMMS improvements. The paper suggests a new classification system for eMMS medication errors. What are the implications for practitioners? The results of the present study will highlight to organisations the need for ongoing review of system design, refinement of workflow issues, staff education and training and reporting and monitoring of errors.
Luijten, Maartje; Machielsen, Marise W.J.; Veltman, Dick J.; Hester, Robert; de Haan, Lieuwe; Franken, Ingmar H.A.
2014-01-01
Background Several current theories emphasize the role of cognitive control in addiction. The present review evaluates neural deficits in the domains of inhibitory control and error processing in individuals with substance dependence and in those showing excessive addiction-like behaviours. The combined evaluation of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) findings in the present review offers unique information on neural deficits in addicted individuals. Methods We selected 19 ERP and 22 fMRI studies using stop-signal, go/no-go or Flanker paradigms based on a search of PubMed and Embase. Results The most consistent findings in addicted individuals relative to healthy controls were lower N2, error-related negativity and error positivity amplitudes as well as hypoactivation in the anterior cingulate cortex (ACC), inferior frontal gyrus and dorsolateral prefrontal cortex. These neural deficits, however, were not always associated with impaired task performance. With regard to behavioural addictions, some evidence has been found for similar neural deficits; however, studies are scarce and results are not yet conclusive. Differences among the major classes of substances of abuse were identified and involve stronger neural responses to errors in individuals with alcohol dependence versus weaker neural responses to errors in other substance-dependent populations. Limitations Task design and analysis techniques vary across studies, thereby reducing comparability among studies and the potential of clinical use of these measures. Conclusion Current addiction theories were supported by identifying consistent abnormalities in prefrontal brain function in individuals with addiction. An integrative model is proposed, suggesting that neural deficits in the dorsal ACC may constitute a hallmark neurocognitive deficit underlying addictive behaviours, such as loss of control. PMID:24359877
Identification of priorities for medication safety in neonatal intensive care.
Kunac, Desireé L; Reith, David M
2005-01-01
Although neonates are reported to be at greater risk of medication error than infants and older children, little is known about the causes and characteristics of error in this patient group. Failure mode and effects analysis (FMEA) is a technique used in industry to evaluate system safety and identify potential hazards in advance. The aim of this study was to identify and prioritize potential failures in the neonatal intensive care unit (NICU) medication use process through application of FMEA. Using the FMEA framework and a systems-based approach, an eight-member multidisciplinary panel worked as a team to create a flow diagram of the neonatal unit medication use process. Then by brainstorming, the panel identified all potential failures, their causes and their effects at each step in the process. Each panel member independently rated failures based on occurrence, severity and likelihood of detection to allow calculation of a risk priority score (RPS). The panel identified 72 failures, with 193 associated causes and effects. Vulnerabilities were found to be distributed across the entire process, but multiple failures and associated causes were possible when prescribing the medication and when preparing the drug for administration. The top ranking issue was a perceived lack of awareness of medication safety issues (RPS score 273), due to a lack of medication safety training. The next highest ranking issues were found to occur at the administration stage. Common potential failures related to errors in the dose, timing of administration, infusion pump settings and route of administration. Perceived causes were multiple, but were largely associated with unsafe systems for medication preparation and storage in the unit, variable staff skill level and lack of computerised technology. Interventions to decrease medication-related adverse events in the NICU should aim to increase staff awareness of medication safety issues and focus on medication administration processes.
Errorless Learning in Cognitive Rehabilitation: A Critical Review
Middleton, Erica L.; Schwartz, Myrna F.
2012-01-01
Cognitive rehabilitation research is increasingly exploring errorless learning interventions, which prioritize the avoidance of errors during treatment. The errorless learning approach was originally developed for patients with severe anterograde amnesia, who were deemed to be at particular risk for error learning. Errorless learning has since been investigated in other memory-impaired populations (e.g., Alzheimer's disease) and acquired aphasia. In typical errorless training, target information is presented to the participant for study or immediate reproduction, a method that prevents participants from attempting to retrieve target information from long-term memory (i.e., retrieval practice). However, assuring error elimination by preventing difficult (and error-permitting) retrieval practice is a potential major drawback of the errorless approach. This review begins with discussion of research in the psychology of learning and memory that demonstrates the importance of difficult (and potentially errorful) retrieval practice for robust learning and prolonged performance gains. We then review treatment research comparing errorless and errorful methods in amnesia and aphasia, where only the latter provides (difficult) retrieval practice opportunities. In each clinical domain we find the advantage of the errorless approach is limited and may be offset by the therapeutic potential of retrieval practice. Gaps in current knowledge are identified that preclude strong conclusions regarding a preference for errorless treatments over methods that prioritize difficult retrieval practice. We offer recommendations for future research aimed at a strong test of errorless learning treatments, which involves direct comparison with methods where retrieval practice effects are maximized for long-term gains. PMID:22247957
Measuring the Lense-Thirring precession using a second Lageos satellite
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Ciufolini, I.
1989-01-01
A complete numerical simulation and error analysis were performed for the proposed experiment with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources with the objective of providing error bounds on the experiment. The analysis of realistic simulated data was used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplementary inclinations, collected for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.
[Failure modes and effects analysis in the prescription, validation and dispensing process].
Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T
2012-01-01
To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes which could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try and stop them from developing. The Hazard Score was calculated for each failure mode; those scoring ≥ 8 were selected, and any failure mode with a Severity Index of 4 was selected regardless of its Hazard Score. Corrective measures and an implementation plan were proposed. A flow diagram that describes the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventive measure and the strategy for achieving it. Failure modes chosen: Prescription on the nurse's form; progress or treatment order (paper); Prescription to incorrect patient; Transcription error by nursing staff and pharmacist; Error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur and their causes. It has allowed us to analyse the effects on the safety of the process, and to establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.
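The selection rule above (retain failure modes with a Hazard Score ≥ 8, plus any with a Severity Index of 4 regardless of Hazard Score) can be expressed as a simple filter. The sketch below is a hypothetical illustration in which the Hazard Score is taken as severity times probability; that composition, and the example rows, are assumptions not stated in the abstract.

```python
# Hypothetical sketch of the selection rule: keep failure modes with
# Hazard Score >= 8 or Severity Index == 4. The Hazard Score is assumed here
# to be severity * probability; the example rows are invented.
failure_modes = [
    {"mode": "Prescription on the nurse's form",     "severity": 3, "probability": 3},
    {"mode": "Prescription to incorrect patient",    "severity": 4, "probability": 1},
    {"mode": "Transcription error by nursing staff", "severity": 2, "probability": 3},
]

def hazard_score(fm):
    return fm["severity"] * fm["probability"]

selected = [fm for fm in failure_modes
            if hazard_score(fm) >= 8 or fm["severity"] == 4]

for fm in selected:
    print(fm["mode"], "-> Hazard Score", hazard_score(fm))
```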
Latent error detection: A golden two hours for detection.
Saward, Justin R E; Stanton, Neville A
2017-03-01
Undetected error in safety-critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion is observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore the sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED; the deliberate review of past tasks within two hours of the error occurring, whilst remaining in the same or a similar sociotechnical environment to that in which the error occurred, appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors, particularly in simple everyday habitual tasks. It is thought that safety-critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Kumar, Savitha Anil; Jayanna, Prashanth; Prabhudesai, Shilpa; Kumar, Ajai
2014-01-01
To collect and tabulate errors and nonconformities in the preanalytical, analytical, and postanalytical process phases in a diagnostic clinical laboratory that supports a super-specialty cancer center in India, and identify areas of potential improvement in patient services. We collected data from our laboratory during a period of 24 months. Departments in the study included clinical biochemistry, hematology, clinical pathology, microbiology and serology, surgical pathology, and molecular pathology. We had initiated quality assessment based on international standards in our laboratory in 2010, with the aim of obtaining accreditation by national and international governing bodies. We followed the guidelines specified by International Organization for Standardization (ISO) 15189:2007 to identify noncompliant elements of our processes. Among a total of 144,030 specimens that our referral laboratory received during the 2-year period of our study, we uncovered an overall error rate for all 3 process phases of 1.23%; all of our error rates closely approximated the results from our peer institutions. Errors were most common in the preanalytical phase in both years of study; preanalytical- and postanalytical-phase errors constituted more than 90% of all errors. Further improvements are warranted in laboratory services and are contingent on adequate training and interdepartmental communication and cooperation. Copyright© by the American Society for Clinical Pathology (ASCP).
NASA Astrophysics Data System (ADS)
Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora
2014-03-01
Providing high-quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult, and therefore the most educationally useful, cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression model that takes imaging features as input and returns a predicted error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems, since computer-extracted features will allow for faster and more extensive searches of imaging databases in order to identify the most educationally beneficial cases.
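The trainee models map computer-extracted image features to a probability of interpretation error and are evaluated with ROC analysis. A minimal sketch of that pipeline using scikit-learn is shown below; the features and reader data are synthetic placeholders, not the study's data.

```python
# Minimal sketch of a trainee error model: logistic regression on image features
# with ROC evaluation. Features and labels here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                          # 5 computer-extracted image features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)   # 1 = trainee interpretation error

model = LogisticRegression().fit(X, y)
pred = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, pred))   # > 0.5 indicates prediction better than chance
```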
Borycki, Elizabeth M; Kushniruk, Andre W; Kuwata, Shigeki; Kannry, Joseph
2011-01-01
Electronic health records (EHRs) promise to improve and streamline healthcare through electronic entry and retrieval of patient data. Furthermore, based on a number of studies showing their positive benefits, they promise to reduce medical error and make healthcare safer. However, a growing body of literature has clearly documented that if EHRs are not designed properly, with usability as an important design goal, their deployment has the potential to increase rather than reduce medical error. In this paper we describe our approach to engineering (and reengineering) EHRs in order to increase their beneficial potential while at the same time improving their safety. The approach described in this paper involves an integration of the methods of usability analysis with video analysis of end users interacting with EHR systems, and extends the evaluation of the usability of EHRs to include the assessment of the impact of these systems on work practices. Using clinical simulations, we analyze human-computer interaction in real healthcare settings (in a portable, low-cost and high-fidelity manner) and include both artificial and naturalistic data collection to identify potential usability problems and sources of technology-induced error prior to widespread system release. Two case studies in which the methods we have developed and refined have been applied at different levels of user-computer interaction are described.
Water displacement leg volumetry in clinical studies - A discussion of error sources
2010-01-01
Background Water displacement leg volumetry is a highly reproducible method, allowing the confirmation of efficacy of vasoactive substances. Nevertheless, errors in its execution and the selection of unsuitable patients are likely to negatively affect the outcome of clinical studies in chronic venous insufficiency (CVI). Discussion Placebo-controlled double-blind drug studies in CVI were searched (Cochrane Review 2005, MedLine search until December 2007) and assessed with regard to efficacy (volume reduction of the leg), patient characteristics, and potential methodological error sources. Almost every second study reported only small drug effects (≤ 30 mL volume reduction). The conduct of volumetry was identified as the most relevant error source. Because the practical use of available equipment varies, volume differences of more than 300 mL - many times the size of a potential treatment effect - have been reported between consecutive measurements. Other potential error sources were insufficient patient guidance or difficulties with the transition from the Widmer CVI classification to the CEAP (Clinical Etiological Anatomical Pathophysiological) grading. Summary Patients should be properly diagnosed with CVI and selected for stable oedema and further clinical symptoms relevant to the specific study. Centres require thorough training in the use of the volumeter and in patient guidance. Volumetry should be performed under constant conditions. The reproducibility of short-term repeat measurements has to be ensured. PMID:20070899
An investigation into false-negative transthoracic fine needle aspiration and core biopsy specimens.
Minot, Douglas M; Gilman, Elizabeth A; Aubry, Marie-Christine; Voss, Jesse S; Van Epps, Sarah G; Tuve, Delores J; Sciallis, Andrew P; Henry, Michael R; Salomao, Diva R; Lee, Peter; Carlson, Stephanie K; Clayton, Amy C
2014-12-01
Transthoracic fine needle aspiration (TFNA)/core needle biopsy (CNB) under computed tomography (CT) guidance has proved useful in the assessment of pulmonary nodules. We sought to determine the TFNA false-negative (FN) rate at our institution and identify potential causes of FN diagnoses. Medical records were reviewed from 1,043 consecutive patients who underwent CT-guided TFNA with or without CNB of lung nodules over a 5-year time period (2003-2007). Thirty-seven FN cases of "negative" TFNA/CNB with malignant outcome were identified with 36 cases available for review, of which 35 had a corresponding CNB. Cases were reviewed independently (blinded to original diagnosis) by three pathologists with 15 age- and sex-matched positive and negative controls. Diagnosis (i.e., nondiagnostic, negative or positive for malignancy, atypical or suspicious) and qualitative assessments were recorded. Consensus diagnosis was suspicious or positive in 10 (28%) of 36 TFNA cases and suspicious in 1 (3%) of 35 CNB cases, indicating potential interpretive errors. Of the 11 interpretive errors (including both suspicious and positive cases), 8 were adenocarcinomas, 1 squamous cell carcinoma, 1 metastatic renal cell carcinoma, and 1 lymphoma. The remaining 25 FN cases (69.4%) were considered sampling errors and consisted of 7 adenocarcinomas, 3 nonsmall cell carcinomas, 3 lymphomas, 2 squamous cell carcinomas, and 2 renal cell carcinomas. Interpretive and sampling error cases were more likely to abut the pleura, while histopathologically, they tended to be necrotic and air-dried. The overall FN rate in this patient cohort is 3.5% (1.1% interpretive and 2.4% sampling errors). © 2014 Wiley Periodicals, Inc.
A new model of Ishikawa diagram for quality assessment
NASA Astrophysics Data System (ADS)
Liliana, Luca
2016-11-01
The paper presents the results of a study concerning the use of the Ishikawa diagram in analyzing the causes that determine errors in the evaluation of parts precision in the machine construction field. The studied problem was "errors in the evaluation of parts precision" and this constitutes the head of the Ishikawa diagram skeleton. All the possible main and secondary causes that could generate the studied problem were identified. The best-known Ishikawa models are 4M, 5M and 6M, the initials standing for materials, methods, man, machines, mother nature and measurement. The paper shows the potential causes of the studied problem, which were first grouped into three categories, as follows: causes that lead to errors in assessing dimensional accuracy, causes that determine errors in the evaluation of shape and position abnormalities, and causes of errors in roughness evaluation. We took into account the main components of parts precision in the machine construction field. For each of the three categories of causes, potential secondary causes were distributed across the M groups (man, methods, machines, materials, environment). We opted for a new model of Ishikawa diagram, resulting from the composition of three fish skeletons corresponding to the main categories of parts accuracy.
Giardina, M; Castiglia, F; Tomarchio, E
2014-12-01
Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
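The paper replaces the crisp RPN product with a fuzzy rule-based aggregation of occurrence, severity and detectability. The sketch below shows one generic way such a rule base might be evaluated with triangular membership functions and centroid defuzzification; the memberships, rules and input values are schematic assumptions, not the authors' actual rule base.

```python
# Schematic sketch of a fuzzy rule-based risk index: triangular memberships on a
# 1-10 scale and a small Mamdani-style rule base, defuzzified by centroid.
# The memberships, rules and input values are illustrative assumptions only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

scale = np.linspace(1, 10, 181)
low  = lambda x: tri(x, 0, 1, 5)
med  = lambda x: tri(x, 3, 5.5, 8)
high = lambda x: tri(x, 6, 10, 11)

def fuzzy_risk(occurrence, severity, detection):
    rules = [
        (min(low(occurrence), low(severity)), low),    # low O and low S   -> low risk
        (min(high(severity), med(detection)), high),   # high S, medium D  -> high risk
        (med(occurrence), med),                        # medium O          -> medium risk
    ]
    agg = np.zeros_like(scale)                         # aggregate the clipped consequents
    for strength, consequent in rules:
        agg = np.maximum(agg, np.minimum(strength, consequent(scale)))
    return float((agg * scale).sum() / (agg.sum() + 1e-9))   # centroid defuzzification

print("Fuzzy risk index:", round(fuzzy_risk(occurrence=4, severity=8, detection=5), 2))
```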
Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A
2010-05-01
Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for identifying errors; and barriers and facilitators. There was no common language in the discussion of modelling errors and there was inconsistency in the perceived boundaries of what constitutes an error. Asked about the definition of model error, there was a tendency for interviewees to exclude matters of judgement from being errors and focus on 'slips' and 'lapses', but discussion of slips and lapses comprised less than 20% of the discussion on types of errors. Interviewees devoted 70% of the discussion to softer elements of the process of defining the decision question and conceptual modelling, mostly the realms of judgement, skills, experience and training. The original focus concerned model errors, but it may be more useful to refer to modelling risks. Several interviewees discussed concepts of validation and verification, with notable consistency in interpretation: verification meaning the process of ensuring that the computer model correctly implemented the intended model, whereas validation means the process of ensuring that a model is fit for purpose. Methodological literature on verification and validation of models makes reference to the Hermeneutic philosophical position, highlighting that the concept of model validation should not be externalized from the decision-makers and the decision-making process. 
Interviewees demonstrated examples of all major error types identified in the literature: errors in the description of the decision problem, in model structure, in use of evidence, in implementation of the model, in operation of the model, and in presentation and understanding of results. The HTA error classifications were compared against existing classifications of model errors in the literature. A range of techniques and processes are currently used to avoid errors in HTA models: engaging with clinical experts, clients and decision-makers to ensure mutual understanding, producing written documentation of the proposed model, explicit conceptual modelling, stepping through skeleton models with experts, ensuring transparency in reporting, adopting standard housekeeping techniques, and ensuring that those parties involved in the model development process have sufficient and relevant training. Clarity and mutual understanding were identified as key issues. However, their current implementation is not framed within an overall strategy for structuring complex problems. Some of the questioning may have biased interviewees' responses but, as all interviewees were represented in the analysis, no rebalancing of the report was deemed necessary. A potential weakness of the literature review was its focus on spreadsheet and program development rather than specifically on model development. It should also be noted that the identified literature concerning programming errors was very narrow despite broad searches being undertaken. Published definitions of overall model validity, comprising conceptual model validation, verification of the computer model, and operational validity of the use of the model in addressing the real-world problem, are consistent with the views expressed by the HTA community and are therefore recommended as the basis for further discussions of model credibility. Such discussions should focus on risks, including errors of implementation, errors in matters of judgement and violations. Discussions of modelling risks should reflect the potentially complex network of cognitive breakdowns that lead to errors in models, and existing research on the cognitive basis of human error should be included in an examination of modelling errors. There is a need to develop a better understanding of the skills requirements for the development, operation and use of HTA models. Interaction between modeller and client in developing mutual understanding of a model establishes that model's significance and its warranty. This highlights that model credibility is the central concern of decision-makers using models, so it is crucial that the concept of model validation should not be externalized from the decision-makers and the decision-making process. Recommendations for future research would be studies of verification and validation; the model development process; and identification of modifications to the modelling process with the aim of preventing the occurrence of errors and improving the identification of errors in models.
Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T
2018-03-05
Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
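The fault trees combine the probabilities of individual doffing errors into a predicted contamination rate. The sketch below illustrates how an OR-gate combination of independent error pathways yields such a prediction; the step names and probabilities are invented placeholders, not the study's estimates.

```python
# Illustrative fault-tree OR-gate: probability that at least one independent
# error pathway leads to self-contamination. Step names and probabilities
# are invented placeholders, not estimates from the study.
from math import prod

error_pathways = {
    "PAPR hood contacts scrubs during removal": 0.06,
    "Glove touches inner clothing":             0.03,
    "Hand hygiene step skipped":                0.02,
}

# P(contamination) = 1 - product over pathways of P(pathway does not occur)
p_contamination = 1 - prod(1 - p for p in error_pathways.values())
print(f"Predicted contamination rate: {p_contamination:.1%}")
```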
49 CFR 193.2509 - Emergency procedures.
Code of Federal Regulations, 2011 CFR
2011-10-01
... plant; (ii) Potential hazards at the plant, including fires; (iii) Communication and emergency control... plant due to operating malfunctions, structural collapse, personnel error, forces of nature, and activities adjacent to the plant. (b) To adequately handle each type of emergency identified under paragraph...
Oldland, Alan R.; May, Sondra K.; Barber, Gerard R.; Stolpman, Nancy M.
2015-01-01
Purpose: To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. Methods: During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Results: Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Conclusions: Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training. PMID:25684799
Oldland, Alan R; Golightly, Larry K; May, Sondra K; Barber, Gerard R; Stolpman, Nancy M
2015-01-01
To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems was associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training.
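The comparisons of error incidence between dispensing systems (for example, 0.157% under the conventional system versus 0.050% after barcode verification with retraining) are comparisons of proportions. The sketch below shows how such a comparison could be reproduced from the reported counts with a chi-square test; the abstract does not state which test was used, so the choice of test here is an assumption.

```python
# Sketch: comparing potential dispensing error rates between two observation
# periods using a 2x2 chi-square test on the counts reported in the abstract.
# The choice of test is an assumption; the abstract does not specify it.
from scipy.stats import chi2_contingency

errors_ud, total_ud = 24, 15271    # conventional unit dose system
errors_bc, total_bc = 13, 26200    # ADCs + barcode verification + relabeling/training

table = [[errors_ud, total_ud - errors_ud],
         [errors_bc, total_bc - errors_bc]]

chi2, p, dof, _ = chi2_contingency(table)
print(f"UD rate {errors_ud/total_ud:.3%}, BC rate {errors_bc/total_bc:.3%}, p = {p:.4f}")
```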
Real-time recognition of feedback error-related potentials during a time-estimation task.
Lopez-Larraz, Eduardo; Iturrate, Iñaki; Montesano, Luis; Minguez, Javier
2010-01-01
Feedback error-related potentials are a promising brain process in the field of rehabilitation since they are related to human learning. Because many therapeutic strategies rely on the presentation of feedback stimuli, the potentials generated by these stimuli could be used to improve the patient's progress. In this paper we propose a method that can identify, in real time, feedback-evoked potentials in a time-estimation task. We tested our system with five participants on two different days separated by three weeks, achieving a mean single-trial detection performance of 71.62% for real-time recognition and 78.08% in offline classification. Additionally, an analysis of the stability of the signal between the two days was performed, suggesting that the feedback responses are stable enough to be used without needing to retrain the user.
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection
Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-01-01
Background The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. Objective We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term “validation relaxation.” Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. Methods We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of “required” constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. Results The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. Conclusions A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. PMID:28821474
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.
Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-08-18
The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy. ©Avi Kenny, Nicholas Gordon, Thomas Griffiths, John D Kraemer, Mark J Siedner. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 18.08.2017.
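The core quantities in the validation relaxation analysis are an error rate (errors committed divided by potential errors) and a logistic regression of error probability on days of application use, reported as an odds ratio per day. A minimal sketch of that analysis on synthetic data is shown below; the simulated counts and coefficients are placeholders, and the use of statsmodels is an implementation choice, not the authors' software.

```python
# Sketch of the validation-relaxation analysis: per-survey error counts out of
# 11 potential errors, with a logistic regression of error probability on days
# of application use. Data are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
days = rng.integers(0, 46, size=500)                 # day of application use for each survey
p_err = 1 / (1 + np.exp(-(-3.7 - 0.03 * days)))      # assumed true per-item error probability
errors = rng.binomial(11, p_err)                     # errors committed out of 11 potential per survey

X = sm.add_constant(days.astype(float))
model = sm.GLM(np.column_stack([errors, 11 - errors]), X,
               family=sm.families.Binomial()).fit()

print("Aggregate error rate:", errors.sum() / (11 * len(days)))
print("OR per additional day of use:", np.exp(model.params[1]))
```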
Similarity-based gene detection: using COGs to find evolutionarily-conserved ORFs.
Powell, Bradford C; Hutchison, Clyde A
2006-01-19
Experimental verification of gene products has not kept pace with the rapid growth of microbial sequence information. However, existing annotations of gene locations contain sufficient information to screen for probable errors. Furthermore, comparisons among genomes become more informative as more genomes are examined. We studied all open reading frames (ORFs) of at least 30 codons from the genomes of 27 sequenced bacterial strains. We grouped the potential peptide sequences encoded from the ORFs by forming Clusters of Orthologous Groups (COGs). We used this grouping in order to find homologous relationships that would not be distinguishable from noise when using simple BLAST searches. Although COG analysis was initially developed to group annotated genes, we applied it to the task of grouping anonymous DNA sequences that may encode proteins. "Mixed COGs" of ORFs (clusters in which some sequences correspond to annotated genes and some do not) are attractive targets when seeking errors of gene prediction. Examination of mixed COGs reveals some situations in which genes appear to have been missed in current annotations and a smaller number of regions that appear to have been annotated as gene loci erroneously. This technique can also be used to detect potential pseudogenes or sequencing errors. Our method uses an adjustable parameter for degree of conservation among the studied genomes (stringency). We detail results for one level of stringency at which we found 83 potential genes which had not previously been identified, 60 potential pseudogenes, and 7 sequences with existing gene annotations that are probably incorrect. Systematic study of sequence conservation offers a way to improve existing annotations by identifying potentially homologous regions where the annotation of the presence or absence of a gene is inconsistent among genomes.
Similarity-based gene detection: using COGs to find evolutionarily-conserved ORFs
Powell, Bradford C; Hutchison, Clyde A
2006-01-01
Background Experimental verification of gene products has not kept pace with the rapid growth of microbial sequence information. However, existing annotations of gene locations contain sufficient information to screen for probable errors. Furthermore, comparisons among genomes become more informative as more genomes are examined. We studied all open reading frames (ORFs) of at least 30 codons from the genomes of 27 sequenced bacterial strains. We grouped the potential peptide sequences encoded from the ORFs by forming Clusters of Orthologous Groups (COGs). We used this grouping in order to find homologous relationships that would not be distinguishable from noise when using simple BLAST searches. Although COG analysis was initially developed to group annotated genes, we applied it to the task of grouping anonymous DNA sequences that may encode proteins. Results "Mixed COGs" of ORFs (clusters in which some sequences correspond to annotated genes and some do not) are attractive targets when seeking errors of gene prediction. Examination of mixed COGs reveals some situations in which genes appear to have been missed in current annotations and a smaller number of regions that appear to have been annotated as gene loci erroneously. This technique can also be used to detect potential pseudogenes or sequencing errors. Our method uses an adjustable parameter for degree of conservation among the studied genomes (stringency). We detail results for one level of stringency at which we found 83 potential genes which had not previously been identified, 60 potential pseudogenes, and 7 sequences with existing gene annotations that are probably incorrect. Conclusion Systematic study of sequence conservation offers a way to improve existing annotations by identifying potentially homologous regions where the annotation of the presence or absence of a gene is inconsistent among genomes. PMID:16423288
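The analysis starts from all open reading frames of at least 30 codons. The sketch below shows a minimal scan for such ORFs (start codon to an in-frame stop) in the three forward reading frames of a nucleotide sequence; reverse-strand scanning and the COG clustering itself are omitted, and the toy sequence is invented.

```python
# Minimal ORF scan: find open reading frames of >= 30 codons (ATG to an in-frame
# stop) in the three forward reading frames of a nucleotide sequence.
# Reverse-strand scanning and COG clustering are omitted for brevity.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_codons=30):
    seq = seq.upper()
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if start is None and codon == "ATG":
                start = i                          # open a candidate ORF
            elif start is not None and codon in STOPS:
                if (i + 3 - start) // 3 >= min_codons:
                    orfs.append((start, i + 3))    # (start, end) in nucleotide coordinates
                start = None
    return orfs

example = "ATG" + "GCT" * 40 + "TAA"   # toy sequence containing one qualifying ORF
print(find_orfs(example))
```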
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-01-01
Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
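The association between laboratory error rates and real-world confusion rates is quantified with regression models whose fit is reported as variance explained, then checked against a second pharmacy chain. The sketch below reproduces that style of analysis on synthetic data; all rates and the linear model are placeholders, not the study's data or best-fitting model.

```python
# Sketch: regressing real-world drug-name confusion error rates on laboratory
# test error rates and reporting variance explained (R^2). Synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
lab_error = rng.uniform(0, 0.4, size=60)                            # lab-test error rate per name pair
real_error = 0.002 + 0.01 * lab_error + rng.normal(0, 0.002, 60)    # real-world rate per pair (chain 1)

model = LinearRegression().fit(lab_error.reshape(-1, 1), real_error)
print("R^2 (chain 1):", round(model.score(lab_error.reshape(-1, 1), real_error), 2))

# Cross-validation against a second, independent chain (also synthetic here).
real_error_chain2 = 0.002 + 0.01 * lab_error + rng.normal(0, 0.002, 60)
print("R^2 (chain 2):", round(r2_score(real_error_chain2,
                                       model.predict(lab_error.reshape(-1, 1))), 2))
```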
The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.
Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius
2017-05-01
To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
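The bias described above is the classical attenuation that arises when a noisy proxy (PDC) stands in for true adherence: the estimated effect shrinks toward zero and can be rescaled by the measure's reliability. The sketch below applies the textbook errors-in-variables correction, dividing the naive estimate by the squared correlation; the abstract does not give the authors' exact formula, so this rule is an assumption, though it reproduces a factor near 5.3 for a correlation around 0.43.

```python
# Sketch of a classical attenuation correction: if r is the correlation between
# the claims-based adherence measure and true adherence, the naive coefficient
# is deflated by roughly r**2 (the reliability), so corrected = naive / r**2.
# This textbook formula is an assumption, not necessarily the authors' derivation.

def corrected_effect(naive_estimate, correlation):
    reliability = correlation ** 2
    return naive_estimate / reliability

for r in (0.38, 0.43, 0.91):
    factor = 1 / r ** 2
    print(f"r = {r:.2f}: true effect is about {factor:.1f} x the claims-based estimate")
```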
Cracking the code: the accuracy of coding shoulder procedures and the repercussions.
Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M
2013-05-01
Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors of shoulder diagnoses and surgical procedures and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that the correct primary diagnosis was assigned in only 54 patients (54%), and only 7 patients (7%) had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2, p < 0.0001) and the correct procedure code (odds ratio 310.0, p < 0.0001). Using the proforma resulted in a £28,562 increase in revenue for the 100 patients evaluated relative to the income generated from the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.
Medication errors: definitions and classification
Aronson, Jeffrey K
2009-01-01
To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526
Applying Intelligent Algorithms to Automate the Identification of Error Factors.
Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han
2018-05-03
Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects from these processes as medical error factors is an effective approach to preventing medical errors. However, this is a difficult and time-consuming task and requires an analyst with a professional medical background; methods to extract medical error factors and to reduce the difficulty of extraction therefore need to be identified. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted; these were then related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared with BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.
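The relational model maps the 19 error-related items to the 12 error factors with a back-propagation neural network whose settings are tuned by a genetic algorithm. The sketch below shows only the neural-network half using scikit-learn on synthetic data, with the genetic algorithm replaced by a plain randomized hyperparameter search for brevity; all shapes, settings and data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the BPNN half of the GA-BPNN model: a multilayer perceptron mapping
# error-related items to error-factor scores. The genetic-algorithm tuning is
# replaced here by a randomized hyperparameter search; data are synthetic.
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.integers(0, 4, size=(624, 19)).astype(float)   # 19 error-related items, 624 cases
W = rng.normal(size=(19, 12))
y = X @ W + rng.normal(scale=0.5, size=(624, 12))      # 12 error-factor scores (synthetic)

search = RandomizedSearchCV(
    MLPRegressor(max_iter=2000, random_state=0),
    {"hidden_layer_sizes": [(10,), (20,), (30,)], "alpha": [1e-4, 1e-3, 1e-2]},
    n_iter=5, cv=3, random_state=0,
)
search.fit(X, y)
print("Best settings:", search.best_params_, "CV R^2:", round(search.best_score_, 2))
```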
Uncharted territory: measuring costs of diagnostic errors outside the medical record.
Schwartz, Alan; Weiner, Saul J; Weaver, Frances; Yudkowsky, Rachel; Sharma, Gunjan; Binns-Calvey, Amy; Preyss, Ben; Jordan, Neil
2012-11-01
In a past study using unannounced standardised patients (USPs), substantial rates of diagnostic and treatment errors were documented among internists. Because the authors know the correct disposition of these encounters and obtained the physicians' notes, they can identify necessary treatment that was not provided and unnecessary treatment. They can also discern which errors can be identified exclusively from a review of the medical records. To estimate the avoidable direct costs incurred by physicians making errors in our previous study. In the study, USPs visited 111 internal medicine attending physicians. They presented variants of four previously validated cases that jointly manipulate the presence or absence of contextual and biomedical factors that could lead to errors in management if overlooked. For example, in a patient with worsening asthma symptoms, a complicating biomedical factor was the presence of reflux disease and a complicating contextual factor was inability to afford the currently prescribed inhaler. Costs of missed or unnecessary services were computed using Medicare cost-based reimbursement data. Fourteen practice locations, including two academic clinics, two community-based primary care networks with multiple sites, a core safety net provider, and three Veteran Administration government facilities. Contribution of errors to costs of care. Overall, errors in care resulted in predicted costs of approximately $174,000 across 399 visits, of which only $8745 was discernible from a review of the medical records alone (without knowledge of the correct diagnoses). The median cost of error per visit with an incorrect care plan differed by case and by presentation variant within case. Chart reviews alone underestimate costs of care because they typically reflect appropriate treatment decisions conditional on (potentially erroneous) diagnoses. Important information about patient context is often entirely missing from medical records. Experimental methods, including the use of USPs, reveal the substantial costs of these errors.
Panesar, Sukhmeet S; Netuveli, Gopalakrishnan; Carson-Stevens, Andrew; Javad, Sundas; Patel, Bhavesh; Parry, Gareth; Donaldson, Liam J; Sheikh, Aziz
2013-11-21
The Orthopaedic Error Index for hospitals aims to provide the first national assessment of the relative safety of provision of orthopaedic surgery. Cross-sectional study (retrospective analysis of records in a database). The National Reporting and Learning System is the largest national repository of patient-safety incidents in the world with over eight million error reports. It offers a unique opportunity to develop novel approaches to enhancing patient safety, including investigating the relative safety of different healthcare providers and specialties. We extracted all orthopaedic error reports from the system over 1 year (2009-2010). The Orthopaedic Error Index was calculated as a sum of the error propensity and severity. All relevant hospitals offering orthopaedic surgery in England were then ranked by this metric to identify possible outliers that warrant further attention. 155 hospitals reported 48 971 orthopaedic-related patient-safety incidents. The mean Orthopaedic Error Index was 7.09/year (SD 2.72); five hospitals were identified as outliers. Three of these units were specialist tertiary hospitals carrying out complex surgery; the remaining two outlier hospitals had unusually high Orthopaedic Error Indexes: mean 14.46 (SD 0.29) and 15.29 (SD 0.51), respectively. The Orthopaedic Error Index has enabled identification of hospitals that may be putting patients at disproportionate risk of orthopaedic-related iatrogenic harm and which therefore warrant further investigation. It provides the prototype of a summary index of harm to enable surveillance of unsafe care over time across institutions. Further validation and scrutiny of the method will be required to assess its potential to be extended to other hospital specialties in the UK and also internationally to other health systems that have comparable national databases of patient-safety incidents.
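The index is computed per hospital as the sum of error propensity and severity, with outliers flagged relative to the distribution across hospitals. The sketch below computes such an index on synthetic data and flags hospitals more than two standard deviations above the mean; the two-SD rule and the data are assumptions for illustration, since the abstract does not state the exact outlier criterion.

```python
# Sketch: Orthopaedic Error Index as propensity + severity per hospital, with
# hospitals flagged if they exceed mean + 2 SD. The 2-SD rule and the data are
# illustrative assumptions; the abstract does not give the exact criterion.
import numpy as np

rng = np.random.default_rng(4)
propensity = rng.normal(4.0, 1.5, size=155).clip(min=0)   # error propensity per hospital
severity = rng.normal(3.0, 1.0, size=155).clip(min=0)     # severity component per hospital
index = propensity + severity

threshold = index.mean() + 2 * index.std()
outliers = np.flatnonzero(index > threshold)
print(f"Mean index {index.mean():.2f} (SD {index.std():.2f}); outlier hospitals: {outliers}")
```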
The famous five factors in teamwork: a case study of fratricide.
Rafferty, Laura A; Stanton, Neville A; Walker, Guy H
2010-10-01
The purpose of this paper is to propose foundations for a theory of errors in teamwork based upon analysis of a case study of fratricide alongside a review of the existing literature. This approach may help to promote a better understanding of interactions within complex systems and help in the formulation of hypotheses and predictions concerning errors in teamwork, particularly incidents of fratricide. It is proposed that a fusion of concepts drawn from error models, with common causal categories taken from teamwork models, could allow for an in-depth exploration of incidents of fratricide. It is argued that such a model has the potential to explore the core causal categories identified as present in an incident of fratricide. This view marks fratricide as a process of errors occurring throughout the military system as a whole, particularly due to problems in teamwork within this complex system. Implications of this viewpoint for the development of a new theory of fratricide are offered. STATEMENT OF RELEVANCE: This article provides an insight into the fusion of existing error and teamwork models for the analysis of an incident of fratricide. Within this paper, a number of commonalities among models of teamwork have been identified allowing for the development of a model.
Mortensen, Jonathan M; Telis, Natalie; Hughey, Jacob J; Fan-Minogue, Hua; Van Auken, Kimberly; Dumontier, Michel; Musen, Mark A
2016-04-01
Biomedical ontologies contain errors. Crowdsourcing, defined as taking a job traditionally performed by a designated agent and outsourcing it to an undefined large group of people, provides scalable access to humans. Therefore, the crowd has the potential to overcome the limited accuracy and scalability found in current ontology quality assurance approaches. Crowd-based methods have identified errors in SNOMED CT, a large, clinical ontology, with an accuracy similar to that of experts, suggesting that crowdsourcing is indeed a feasible approach for identifying ontology errors. This work uses that same crowd-based methodology, as well as a panel of experts, to verify a subset of the Gene Ontology (200 relationships). Experts identified 16 errors, generally in relationships referencing acids and metals. The crowd performed poorly in identifying those errors, with an area under the receiver operating characteristic curve ranging from 0.44 to 0.73, depending on the method's configuration. However, when the crowd verified what experts considered to be easy relationships with useful definitions, they performed reasonably well. Notably, there are significantly fewer Google search results for Gene Ontology concepts than SNOMED CT concepts. This disparity may account for the difference in performance - fewer search results indicate a more difficult task for the worker. The number of Internet search results could serve as a method to assess which tasks are appropriate for the crowd. These results suggest that the crowd fits better as an expert assistant, helping experts with their verification by completing the easy tasks and allowing experts to focus on the difficult tasks, rather than as an expert replacement. Copyright © 2016 Elsevier Inc. All rights reserved.
Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.
Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao
2017-06-30
Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may be different based on the nature of experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create over 1800 various QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorates when the ratio of experimental errors increases. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in cross-validation processes are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that the QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors. But removing those compounds by the cross-validation procedure is not a reasonable means to improve model predictivity due to overfitting.
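The error-simulation design described above can be illustrated with a short sketch: duplicate a modeling set, randomize the activities of a growing fraction of compounds, and track five-fold cross-validation performance. The descriptors, endpoint, and random-forest learner below are illustrative assumptions, not the data sets or algorithms used in the study.

```python
# Sketch of simulating experimental error in a QSAR modeling set and
# measuring its effect on five-fold cross-validation performance.
# The descriptors, endpoint, and learner below are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                                  # stand-in molecular descriptors
y = X[:, 0] * 2.0 - X[:, 1] + rng.normal(scale=0.3, size=500)   # "true" activities

for error_ratio in (0.0, 0.1, 0.2, 0.4):
    y_noisy = y.copy()
    n_bad = int(error_ratio * len(y))
    bad = rng.choice(len(y), size=n_bad, replace=False)
    y_noisy[bad] = rng.permutation(y[bad])                      # randomize activities of a subset
    r2 = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=0),
                         X, y_noisy, cv=5, scoring="r2").mean()
    print(f"simulated error ratio {error_ratio:.0%}: mean 5-fold CV R2 = {r2:.2f}")
```

As expected from the study's findings, cross-validated performance deteriorates as the simulated error ratio grows.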
Errare machinale est: the use of error-related potentials in brain-machine interfaces
Chavarriaga, Ricardo; Sobolewski, Aleksander; Millán, José del R.
2014-01-01
The ability to recognize errors is crucial for efficient behavior. Numerous studies have identified electrophysiological correlates of error recognition in the human brain (error-related potentials, ErrPs). Consequently, it has been proposed to use these signals to improve human-computer interaction (HCI) or brain-machine interfacing (BMI). Here, we present a review of over a decade of developments toward this goal. This body of work provides consistent evidence that ErrPs can be successfully detected on a single-trial basis, and that they can be effectively used in both HCI and BMI applications. We first describe the ErrP phenomenon and follow up with an analysis of different strategies to increase the robustness of a system by incorporating single-trial ErrP recognition, either by correcting the machine's actions or by providing means for its error-based adaptation. These approaches can be applied both when the user employs traditional HCI input devices or in combination with another BMI channel. Finally, we discuss the current challenges that have to be overcome in order to fully integrate ErrPs into practical applications. This includes, in particular, the characterization of such signals during real(istic) applications, as well as the possibility of extracting richer information from them, going beyond the time-locked decoding that dominates current approaches. PMID:25100937
The cost of implementing inpatient bar code medication administration.
Sakowski, Julie Ann; Ketchel, Alan
2013-02-01
To calculate the costs associated with implementing and operating an inpatient bar-code medication administration (BCMA) system in the community hospital setting and to estimate the cost per harmful error prevented. This is a retrospective, observational study. Costs were calculated from the hospital perspective, and a cost-consequence analysis was performed to estimate the cost per preventable adverse drug event averted. Costs were collected from financial records and key informant interviews at four not-for-profit community hospitals. Costs included direct expenditures on capital, infrastructure, additional personnel, and the opportunity costs of time for existing personnel working on the project. The number of adverse drug events prevented using BCMA was estimated by multiplying the number of doses administered using BCMA by the rate of harmful errors prevented by interventions in response to system warnings. Our previous work found that BCMA identified and intercepted medication errors in 1.1% of doses administered, 9% of which potentially could have resulted in lasting harm. The cost of implementing and operating BCMA, including electronic pharmacy management and drug repackaging, over 5 years is $40,000 (range: $35,600 to $54,600) per BCMA-enabled bed and $2000 (range: $1800 to $2600) per harmful error prevented. BCMA can be an effective and potentially cost-saving tool for preventing the harm and costs associated with medication errors.
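The cost-consequence arithmetic implied by the reported rates can be sketched as follows; the dose volume and number of BCMA-enabled beds are assumed values for illustration, while the 1.1% interception rate, 9% harmful fraction, and $40,000 per-bed cost come from the abstract.

```python
# Back-of-envelope cost-consequence calculation for BCMA, using the rates
# reported in the abstract; the dose volume and bed count are assumptions.

doses_administered = 5_000_000          # doses scanned over 5 years (assumed)
intercept_rate = 0.011                  # 1.1% of doses had an error intercepted
harmful_fraction = 0.09                 # 9% of intercepted errors potentially harmful
cost_per_bed = 40_000                   # 5-year cost per BCMA-enabled bed (abstract)
beds = 200                              # BCMA-enabled beds (assumed)

harmful_errors_prevented = doses_administered * intercept_rate * harmful_fraction
total_cost = cost_per_bed * beds
print(f"harmful errors prevented: {harmful_errors_prevented:,.0f}")
print(f"cost per harmful error prevented: ${total_cost / harmful_errors_prevented:,.0f}")
```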
Errors in radiation oncology: A study in pathways and dosimetric impact
Drzymala, Robert E.; Purdy, James A.; Michalski, Jeff
2005-01-01
As the complexity of treating patients increases, so does the risk of error. Some publications have suggested that record and verify (R&V) systems may contribute to propagating errors. Direct data transfer has the potential to eliminate most, but not all, errors. Although the dosimetric consequences may be obvious in some cases, a detailed study does not exist. In this effort, we examined potential errors in terms of scenarios, pathways of occurrence, and dosimetry. Our goal was to prioritize error prevention according to likelihood of event and dosimetric impact. For conventional photon treatments, we investigated errors of incorrect source‐to‐surface distance (SSD), energy, omitted wedge (physical, dynamic, or universal) or compensating filter, incorrect wedge or compensating filter orientation, improper rotational rate for arc therapy, and geometrical misses due to incorrect gantry, collimator or table angle, reversed field settings, and setup errors. For electron beam therapy, errors investigated included incorrect energy and incorrect SSD, along with geometric misses. For special procedures we examined errors for total body irradiation (TBI, incorrect field size, dose rate, treatment distance) and LINAC radiosurgery (incorrect collimation setting, incorrect rotational parameters). Likelihood of error was determined and subsequently rated according to our history of detecting such errors. Dosimetric evaluation was conducted by using dosimetric data, treatment plans, or measurements. We found geometric misses to have the highest error probability. They most often occurred due to improper setup via coordinate shift errors or incorrect field shaping. The dosimetric impact is unique for each case and depends on the proportion of fields in error and volume mistreated. These errors were short‐lived due to rapid detection via port films. The most significant dosimetric error was related to a reversed wedge direction. This may occur due to incorrect collimator angle or wedge orientation. For parallel‐opposed 60° wedge fields, this error could be as high as 80% to a point off‐axis. Other examples of dosimetric impact included the following: SSD, ~2%/cm for photons or electrons; photon energy (6 MV vs. 18 MV), on average 16% depending on depth; electron energy, ~0.5 cm of depth coverage per MeV (mega‐electron volt). Of these examples, incorrect distances were most likely but rapidly detected by in vivo dosimetry. Errors were categorized by occurrence rate, methods and timing of detection, longevity, and dosimetric impact. Solutions were devised according to these criteria. To date, no one has studied the dosimetric impact of global errors in radiation oncology. Although there is heightened awareness that with increased use of ancillary devices and automation, there must be a parallel increase in quality check systems and processes, errors do and will continue to occur. This study has helped us identify and prioritize potential errors in our clinic according to frequency and dosimetric impact. For example, to reduce the use of an incorrect wedge direction, our clinic employs off‐axis in vivo dosimetry. To avoid a treatment distance setup error, we use both vertical table settings and optical distance indicator (ODI) values to properly set up fields. As R&V systems become more automated, more accurate and efficient data transfer will occur. This will require further analysis. Finally, we have begun examining potential intensity‐modulated radiation therapy (IMRT) errors according to the same criteria.
PACS numbers: 87.53.Xd, 87.53.St PMID:16143793
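The quoted ~2%/cm sensitivity to SSD setup errors follows roughly from the inverse-square law, as the short sketch below illustrates; the nominal 100 cm SSD and 10 cm calculation depth are assumptions for illustration only.

```python
# Rough inverse-square estimate of dose error from an SSD setup error.
# Nominal SSD and calculation depth are illustrative assumptions;
# the abstract quotes roughly 2% per cm for photons or electrons.

def dose_error_percent(ssd_cm, depth_cm, ssd_error_cm):
    nominal = 1.0 / (ssd_cm + depth_cm) ** 2
    actual = 1.0 / (ssd_cm + ssd_error_cm + depth_cm) ** 2
    return (actual / nominal - 1.0) * 100.0

for err in (1.0, 2.0, 3.0):
    print(f"SSD error {err:.0f} cm -> dose change {dose_error_percent(100, 10, err):+.1f}%")
```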
Novel Myopia Genes and Pathways Identified From Syndromic Forms of Myopia
Loughman, James; Wildsoet, Christine F.; Williams, Cathy; Guggenheim, Jeremy A.
2018-01-01
Purpose To test the hypothesis that genes known to cause clinical syndromes featuring myopia also harbor polymorphisms contributing to nonsyndromic refractive errors. Methods Clinical phenotypes and syndromes that have refractive errors as a recognized feature were identified using the Online Mendelian Inheritance in Man (OMIM) database. One hundred fifty-four unique causative genes were identified, of which 119 were specifically linked with myopia and 114 represented syndromic myopia (i.e., myopia and at least one other clinical feature). Myopia was the only refractive error listed for 98 genes and hyperopia was the only refractive error noted for 28 genes, with the remaining 28 genes linked to phenotypes with multiple forms of refractive error. Pathway analysis was carried out to find biological processes overrepresented within these sets of genes. Genetic variants located within 50 kb of the 119 myopia-related genes were evaluated for involvement in refractive error by analysis of summary statistics from genome-wide association studies (GWAS) conducted by the CREAM Consortium and 23andMe, using both single-marker and gene-based tests. Results Pathway analysis identified several biological processes already implicated in refractive error development through prior GWAS analyses and animal studies, including extracellular matrix remodeling, focal adhesion, and axon guidance, supporting the research hypothesis. Novel pathways also implicated in myopia development included mannosylation, glycosylation, lens development, gliogenesis, and Schwann cell differentiation. Hyperopia was found to be linked to a different pattern of biological processes, mostly related to organogenesis. Comparison with GWAS findings further confirmed that syndromic myopia genes were enriched for genetic variants that influence refractive errors in the general population. Gene-based analyses implicated 21 novel candidate myopia genes (ADAMTS18, ADAMTS2, ADAMTSL4, AGK, ALDH18A1, ASXL1, COL4A1, COL9A2, ERBB3, FBN1, GJA1, GNPTG, IFIH1, KIF11, LTBP2, OCA2, POLR3B, POMT1, PTPN11, TFAP2A, ZNF469). Conclusions Common genetic variants within or nearby genes that cause syndromic myopia are enriched for variants that cause nonsyndromic, common myopia. Analysis of syndromic forms of refractive errors can provide new insights into the etiology of myopia and additional potential targets for therapeutic interventions. PMID:29346494
Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing
2017-09-05
Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), and (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 as reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. Most systematic reviews of interventions for refractive error are of low methodological quality. Following widely accepted guidance, such as Cochrane or Institute of Medicine standards for conducting systematic reviews, would contribute to improved patient care and inform future research.
Awareness of technology-induced errors and processes for identifying and preventing such errors.
Bellwood, Paule; Borycki, Elizabeth M; Kushniruk, Andre W
2015-01-01
There is a need to determine if organizations working with health information technology are aware of technology-induced errors and how they are addressing and preventing them. The purpose of this study was to: a) determine the degree of technology-induced error awareness in various Canadian healthcare organizations, and b) identify those processes and procedures that are currently in place to help address, manage, and prevent technology-induced errors. We identified a lack of technology-induced error awareness among participants. Participants identified there was a lack of well-defined procedures in place for reporting technology-induced errors, addressing them when they arise, and preventing them.
Tailoring a Human Reliability Analysis to Your Industry Needs
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2016-01-01
Companies at risk of accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element to developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involved with humans operating large equipment or transport systems (ex. railroads or airlines) would have more need to address the man machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.
Quality Assurance of NCI Thesaurus by Mining Structural-Lexical Patterns
Abeysinghe, Rashmie; Brooks, Michael A.; Talbert, Jeffery; Cui, Licong
2017-01-01
Quality assurance of biomedical terminologies such as the National Cancer Institute (NCI) Thesaurus is an essential part of the terminology management lifecycle. We investigate a structural-lexical approach based on non-lattice subgraphs to automatically identify missing hierarchical relations and missing concepts in the NCI Thesaurus. We mine six structural-lexical patterns exhibited in non-lattice subgraphs: containment, union, intersection, union-intersection, inference-contradiction, and inference-union. Each pattern indicates a potential specific type of error and suggests a potential type of remediation. We found 809 non-lattice subgraphs with these patterns in the NCI Thesaurus (version 16.12d). Domain experts evaluated a random sample of 50 small non-lattice subgraphs, of which 33 were confirmed to contain errors and make correct suggestions (33/50 = 66%). Of the 25 evaluated subgraphs revealing multiple patterns, 22 were verified correct (22/25 = 88%). This shows the effectiveness of our structural-lexical-pattern-based approach in detecting errors and suggesting remediations in the NCI Thesaurus. PMID:29854100
Terkola, R; Czejka, M; Bérubé, J
2017-08-01
Medication errors are a significant cause of morbidity and mortality especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation which may occur when verification is reliant on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale, retrospective analysis of data was carried out, related to medication errors identified during preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as errors in dose volumes outside tolerance levels, identified during weighing stages of preparation of chemotherapy solutions which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759 060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors which would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.
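A minimal sketch of the gravimetric check described above: compare the measured weight of each drawn dose with the expected weight (prescribed volume times solution density) and block preparations whose deviation exceeds the site's tolerance. The drugs, densities, and 5% tolerance below are illustrative assumptions, not values from the participating centres.

```python
# Minimal sketch of a gravimetric dose check: flag preparations whose measured
# weight deviates from the expected weight by more than the site tolerance.
# Densities, doses, and the 5% tolerance below are illustrative assumptions.

def check_dose(expected_volume_ml, density_g_per_ml, measured_weight_g, tolerance=0.05):
    expected_weight = expected_volume_ml * density_g_per_ml
    deviation = (measured_weight_g - expected_weight) / expected_weight
    return deviation, abs(deviation) > tolerance

preparations = [
    ("cyclophosphamide", 25.0, 1.02, 25.4),   # (drug, volume mL, density g/mL, measured g)
    ("fluorouracil",     18.0, 1.05, 16.1),
]
for drug, vol, dens, weight in preparations:
    dev, out_of_range = check_dose(vol, dens, weight)
    status = "BLOCK - outside tolerance" if out_of_range else "ok"
    print(f"{drug}: deviation {dev:+.1%} -> {status}")
```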
Identifying Novice Student Programming Misconceptions and Errors from Summative Assessments
ERIC Educational Resources Information Center
Veerasamy, Ashok Kumar; D'Souza, Daryl; Laakso, Mikko-Jussi
2016-01-01
This article presents a study aimed at examining novice student answers in an introductory programming final e-exam to identify misconceptions and types of errors. Our study used the Delphi concept inventory to identify student misconceptions and the skill-, rule-, and knowledge-based error approach to identify the types of errors made by novices…
Nurses' behaviors and visual scanning patterns may reduce patient identification errors.
Marquard, Jenna L; Henneman, Philip L; He, Ze; Jo, Junghee; Fisher, Donald L; Henneman, Elizabeth A
2011-09-01
Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20) administered medications to 3 patients in a simulated clinical setting, with 1 patient having an embedded ID error. Error-identifying nurses tended to complete more process steps in a similar amount of time as non-error-identifying nurses and tended to scan information across artifacts (e.g., ID band, patient chart, medication label) rather than fixating on several pieces of information on a single artifact before fixating on another artifact. Non-error-identifying nurses tended to increase their durations of off-topic conversations (a type of process interruption) over the course of the trials; the difference between groups was significant in the trial with the embedded ID error. Error-identifying nurses tended to have their most fixations in a row on the patient's chart, whereas non-error-identifying nurses did not tend to have a single artifact on which they consistently fixated. Finally, error-identifying nurses tended to have predictable eye fixation sequences across artifacts, whereas non-error-identifying nurses tended to have seemingly random eye fixation sequences. This finding has implications for nurse training and the design of tools and technologies that support nurses as they complete the medication administration process. (c) 2011 APA, all rights reserved.
Yousefinezhadi, Taraneh; Jannesar Nobari, Farnaz Attar; Goodari, Faranak Behzadi; Arab, Mohammad
2016-01-01
Introduction: In any complex human system, human error is inevitable and cannot be eliminated by blaming wrongdoers. With the aim of improving the reliability of hospital Intensive Care Units (ICUs), this research sought to identify and analyze ICU process failure modes using a systems approach to error. Methods: In this descriptive study, data were gathered qualitatively through observations, document reviews, and Focus Group Discussions (FGDs) with process owners in two selected ICUs in Tehran in 2014. Data analysis was quantitative, based on the failures' Risk Priority Numbers (RPNs) derived from the Failure Modes and Effects Analysis (FMEA) method; selected causes of failures were also analyzed qualitatively with the Eindhoven Classification Model (ECM). Results: Using FMEA, 378 potential failure modes from 180 ICU activities in hospital A and 184 potential failure modes from 99 ICU activities in hospital B were identified and evaluated. At the 90% reliability threshold (RPN ≥ 100), 18 failures in hospital A and 42 in hospital B were classified as non-acceptable risks, and their causes were analyzed with ECM. Conclusions: Applying a modified process FMEA to improve the reliability of ICU processes in two different kinds of hospitals showed that the method empowers staff to identify, evaluate, prioritize, and analyze all potential failure modes, and encourages them to identify causes, recommend corrective actions, and participate in process improvement without feeling blamed by top management. Moreover, by combining FMEA and ECM, team members can readily identify failure causes from a healthcare perspective. PMID:27157162
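FMEA conventionally scores each failure mode on severity, occurrence, and detectability (each on a 1-10 scale) and multiplies them to obtain the Risk Priority Number; the RPN ≥ 100 threshold for non-acceptable risk follows the abstract. The ICU failure modes below are hypothetical examples, not items from the study.

```python
# Conventional FMEA risk priority number: RPN = severity x occurrence x detectability,
# each scored 1-10. The RPN >= 100 action threshold follows the abstract; the
# failure modes below are hypothetical ICU examples.

failure_modes = [
    ("ventilator alarm limits not set",  8, 4, 5),
    ("wrong infusion pump rate entered", 9, 3, 4),
    ("delayed blood gas analysis",       5, 4, 3),
]

for name, severity, occurrence, detectability in sorted(
        failure_modes, key=lambda f: f[1] * f[2] * f[3], reverse=True):
    rpn = severity * occurrence * detectability
    flag = "non-acceptable risk (analyse causes)" if rpn >= 100 else "acceptable"
    print(f"{name}: RPN={rpn} -> {flag}")
```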
Runtime Verification in Context: Can Optimizing Error Detection Improve Fault Diagnosis
NASA Technical Reports Server (NTRS)
Dwyer, Matthew B.; Purandare, Rahul; Person, Suzette
2010-01-01
Runtime verification has primarily been developed and evaluated as a means of enriching the software testing process. While many researchers have pointed to its potential applicability in online approaches to software fault tolerance, there has been a dearth of work exploring the details of how that might be accomplished. In this paper, we describe how a component-oriented approach to software health management exposes the connections between program execution, error detection, fault diagnosis, and recovery. We identify both research challenges and opportunities in exploiting those connections. Specifically, we describe how recent approaches to reducing the overhead of runtime monitoring aimed at error detection might be adapted to reduce the overhead and improve the effectiveness of fault diagnosis.
ERP correlates of error processing during performance on the Halstead Category Test.
Santos, I M; Teixeira, A R; Tomé, A M; Pereira, A T; Rodrigues, P; Vagos, P; Costa, J; Carrito, M L; Oliveira, B; DeFilippis, N A; Silva, C F
2016-08-01
The Halstead Category Test (HCT) is a neuropsychological test that measures a person's ability to formulate and apply abstract principles. Performance must be adjusted based on feedback after each trial and errors are common until the underlying rules are discovered. Event-related potential (ERP) studies associated with the HCT are lacking. This paper demonstrates the use of a methodology inspired on Singular Spectrum Analysis (SSA) applied to EEG signals, to remove high amplitude ocular and movement artifacts during performance on the test. This filtering technique introduces no phase or latency distortions, with minimum loss of relevant EEG information. Importantly, the test was applied in its original clinical format, without introducing adaptations to ERP recordings. After signal treatment, the feedback-related negativity (FRN) wave, which is related to error-processing, was identified. This component peaked around 250ms, after feedback, in fronto-central electrodes. As expected, errors elicited more negative amplitudes than correct responses. Results are discussed in terms of the increased clinical potential that coupling ERP information with behavioral performance data can bring to the specificity of the HCT in diagnosing different types of impairment in frontal brain function. Copyright © 2016. Published by Elsevier B.V.
Using failure mode and effects analysis to improve the safety of neonatal parenteral nutrition.
Arenas Villafranca, Jose Javier; Gómez Sánchez, Araceli; Nieto Guindo, Miriam; Faus Felipe, Vicente
2014-07-15
Failure mode and effects analysis (FMEA) was used to identify potential errors and to enable the implementation of measures to improve the safety of neonatal parenteral nutrition (PN). FMEA was used to analyze the preparation and dispensing of neonatal PN from the perspective of the pharmacy service in a general hospital. A process diagram was drafted, illustrating the different phases of the neonatal PN process. Next, the failures that could occur in each of these phases were compiled and cataloged, and a questionnaire was developed in which respondents were asked to rate the following aspects of each error: incidence, detectability, and severity. The highest scoring failures were considered high risk and identified as priority areas for improvements to be made. The evaluation process detected a total of 82 possible failures. Among the phases with the highest number of possible errors were transcription of the medical order, formulation of the PN, and preparation of material for the formulation. After the classification of these 82 possible failures and of their relative importance, a checklist was developed to achieve greater control in the error-detection process. FMEA demonstrated that use of the checklist reduced the level of risk and improved the detectability of errors. FMEA was useful for detecting medication errors in the PN preparation process and enabling corrective measures to be taken. A checklist was developed to reduce errors in the most critical aspects of the process. Copyright © 2014 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
National trends in safety performance of electronic health record systems in children's hospitals.
Chaparro, Juan D; Classen, David C; Danforth, Melissa; Stockwell, David C; Longhurst, Christopher A
2017-03-01
To evaluate the safety of computerized physician order entry (CPOE) and associated clinical decision support (CDS) systems in electronic health record (EHR) systems at pediatric inpatient facilities in the US using the Leapfrog Group's pediatric CPOE evaluation tool. The Leapfrog pediatric CPOE evaluation tool, a previously validated tool to assess the ability of a CPOE system to identify orders that could potentially lead to patient harm, was used to evaluate 41 pediatric hospitals over a 2-year period. Evaluation of the last available test for each institution was performed, assessing performance overall as well as by decision support category (eg, drug-drug, dosing limits). Longitudinal analysis of test performance was also carried out to assess the impact of testing and the overall trend of CPOE performance in pediatric hospitals. Pediatric CPOE systems were able to identify 62% of potential medication errors in the test scenarios, but ranged widely from 23-91% in the institutions tested. The highest scoring categories included drug-allergy interactions, dosing limits (both daily and cumulative), and inappropriate routes of administration. We found that hospitals with longer periods since their CPOE implementation did not have better scores upon initial testing, but after initial testing there was a consistent improvement in testing scores of 4 percentage points per year. Pediatric computerized physician order entry (CPOE) systems on average are able to intercept a majority of potential medication errors, but vary widely among implementations. Prospective and repeated testing using the Leapfrog Group's evaluation tool is associated with improved ability to intercept potential medication errors. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
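Scoring such a test amounts to the fraction of potentially harmful test orders that triggered an alert, overall and within each decision-support category. A small sketch with hypothetical order records:

```python
# Sketch of scoring a CPOE safety test: the share of potentially harmful test
# orders that the system alerted on, overall and per decision-support category.
# The test orders below are hypothetical.

from collections import defaultdict

test_orders = [  # (category, alert fired?)
    ("drug-drug interaction", True), ("drug-drug interaction", False),
    ("dosing limit",          True), ("dosing limit",          True),
    ("drug-allergy",          True), ("inappropriate route",   False),
]

by_category = defaultdict(lambda: [0, 0])
for category, alerted in test_orders:
    by_category[category][0] += alerted
    by_category[category][1] += 1

caught = sum(alerted for _, alerted in test_orders)
print(f"overall: {caught}/{len(test_orders)} potential errors intercepted")
for category, (hits, total) in by_category.items():
    print(f"  {category}: {hits}/{total}")
```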
Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H
2017-04-01
To identify between and within profession-rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases. To identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 Critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between and within profession-rater reliability and modal clinical impact grading. Between and within profession rater reliability analysis used linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between professional variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry
2010-12-01
Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error [2, 3]. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted with the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.
Caveat emptor: Erroneous safety information about opioids in online drug-information compendia.
Talwar, Sonia R; Randhawa, Amarita S; Dankiewicz, Erica H; Crudele, Nancy T; Haddox, J David
2016-01-01
Healthcare professionals and consumers refer to online drug-information compendia (eg, Epocrates and WebMD) to learn about prescription medications, including opioid analgesics. With the significant risks associated with opioids, including abuse, misuse, and addiction, any of which can result in life-threatening overdose, it is important for those seeking information from online compendia to have access to current, accurate, and complete drug information to help support clinical treatment decisions. Although compendia are informative, readily available, and user friendly, studies have shown that they may contain errors. To review and identify misinformation in drug summaries of online drug-information compendia for selected opioid analgesic products and submit content corrections to the respective editors. Between 2011 and 2013, drug summaries for Purdue's prescription opioid analgesic products from seven leading online drug-information compendia were systematically reviewed, and the requests for corrections were retrospectively categorized and classified. At least 2 months following requests, the same compendia were then reexamined to assess the degree of error resolution. A total of 859 errors were identified, with the greatest percentage in Safety and Patient Education categories. Across the seven compendia, the complete or partial resolution of errors was 34 percent; therefore, nearly two thirds of the identified errors remain. The results of this analysis, consistent with past studies, demonstrate that online drug-information compendia may contain inaccurate information. Healthcare professionals and consumers must be informed of potential misinformation so they may consider using multiple resources to obtain accurate and current drug information, thereby helping to ensure safer use of prescription medications, such as opioids.
Staubach, Maria
2009-09-01
This study aims to identify factors which influence and cause errors in traffic accidents and to use these as a basis for information to guide the application and design of driver assistance systems. A total of 474 accidents were examined in depth for this study by means of a psychological survey, data from accident reports, and technical reconstruction information. An error analysis was subsequently carried out, taking into account the driver, environment, and vehicle sub-systems. Results showed that all accidents were influenced by errors as a consequence of distraction and reduced activity. For crossroad accidents, there were further errors resulting from sight obstruction, masked stimuli, focus errors, and law infringements. Lane departure crashes were additionally caused by errors as a result of masked stimuli, law infringements, expectation errors as well as objective and action slips, while same direction accidents occurred additionally because of focus errors, expectation errors, and objective and action slips. Most accidents were influenced by multiple factors. There is a safety potential for Advanced Driver Assistance Systems (ADAS), which support the driver in information assimilation and help to avoid distraction and reduced activity. The design of the ADAS is dependent on the specific influencing factors of the accident type.
Follow-up of negative MRI-targeted prostate biopsies: when are we missing cancer?
Gold, Samuel A; Hale, Graham R; Bloom, Jonathan B; Smith, Clayton P; Rayn, Kareem N; Valera, Vladimir; Wood, Bradford J; Choyke, Peter L; Turkbey, Baris; Pinto, Peter A
2018-05-21
Multiparametric magnetic resonance imaging (mpMRI) has improved clinicians' ability to detect clinically significant prostate cancer (csPCa). Combining or fusing these images with the real-time imaging of transrectal ultrasound (TRUS) allows urologists to better sample lesions with a targeted biopsy (Tbx) leading to the detection of greater rates of csPCa and decreased rates of low-risk PCa. In this review, we evaluate the technical aspects of the mpMRI-guided Tbx procedure to identify possible sources of error and provide clinical context to a negative Tbx. A literature search was conducted of possible reasons for false-negative TBx. This includes discussion on false-positive mpMRI findings, termed "PCa mimics," that may incorrectly suggest high likelihood of csPCa as well as errors during Tbx resulting in inexact image fusion or biopsy needle placement. Despite the strong negative predictive value associated with Tbx, concerns of missed disease often remain, especially with MR-visible lesions. This raises questions about what to do next after a negative Tbx result. Potential sources of error can arise from each step in the targeted biopsy process ranging from "PCa mimics" or technical errors during mpMRI acquisition to failure to properly register MRI and TRUS images on a fusion biopsy platform to technical or anatomic limits on needle placement accuracy. A better understanding of these potential pitfalls in the mpMRI-guided Tbx procedure will aid interpretation of a negative Tbx, identify areas for improving technical proficiency, and improve both physician understanding of negative Tbx and patient-management options.
Quality Issues of Court Reporters and Transcriptionists for Qualitative Research
Hennink, Monique; Weber, Mary Beth
2015-01-01
Transcription is central to qualitative research, yet few researchers identify the quality of different transcription methods. We described the quality of verbatim transcripts from traditional transcriptionists and court reporters by reviewing 16 transcripts from 8 focus group discussions using four criteria: transcription errors, cost and time of transcription, and effect on study participants. Transcriptionists made fewer errors, captured colloquial dialogue, and errors were largely influenced by the quality of the recording. Court reporters made more errors, particularly in the omission of topical content and contextual detail and were less able to produce a verbatim transcript; however the potential immediacy of the transcript was advantageous. In terms of cost, shorter group discussions favored a transcriptionist and longer groups a court reporter. Study participants reported no effect by either method of recording. Understanding the benefits and limitations of each method of transcription can help researchers select an appropriate method for each study. PMID:23512435
#2 - An Empirical Assessment of Exposure Measurement Error ...
Background: Differing degrees of exposure error exist across pollutants. Previous work has focused on quantifying and accounting for exposure error in single-pollutant models. This work examines exposure errors for multiple pollutants and provides insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
Effect of electrical coupling on ionic current and synaptic potential measurements.
Rabbah, Pascale; Golowasch, Jorge; Nadim, Farzan
2005-07-01
Recent studies have found electrical coupling to be more ubiquitous than previously thought, and coupling through gap junctions is known to play a crucial role in neuronal function and network output. In particular, current spread through gap junctions may affect the activation of voltage-dependent conductances as well as chemical synaptic release. Using voltage-clamp recordings of two strongly electrically coupled neurons of the lobster stomatogastric ganglion and conductance-based models of these neurons, we identified effects of electrical coupling on the measurement of leak and voltage-gated outward currents, as well as synaptic potentials. Experimental measurements showed that both leak and voltage-gated outward currents are recruited by gap junctions from neurons coupled to the clamped cell. Nevertheless, in spite of the strong coupling between these neurons, the errors made in estimating voltage-gated conductance parameters were relatively minor (<10%). Thus in many cases isolation of coupled neurons may not be required if a small degree of measurement error of the voltage-gated currents or the synaptic potentials is acceptable. Modeling results show, however, that such errors may be as high as 20% if the gap-junction position is near the recording site or as high as 90% when measuring smaller voltage-gated ionic currents. Paradoxically, improved space clamp increases the errors arising from electrical coupling because voltage control across gap junctions is poor for even the highest realistic coupling conductances. Furthermore, the common procedure of leak subtraction can add an extra error to the conductance measurement, the sign of which depends on the maximal conductance.
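The circuit intuition behind the leak-current contamination can be sketched with a simple series-conductance calculation: under voltage clamp, the coupled neighbour contributes the series combination of the coupling conductance and its own conductance to the apparent leak. All conductance values below are illustrative and are not those of the stomatogastric neurons studied.

```python
# Sketch of how a gap junction inflates a leak-conductance measurement made in
# one neuron of a coupled pair: the neighbour contributes its conductance in
# series with the coupling conductance. All values are illustrative (nS).

def apparent_leak(g_leak_clamped, g_coupling, g_leak_neighbor):
    series = (g_coupling * g_leak_neighbor) / (g_coupling + g_leak_neighbor)
    return g_leak_clamped + series

g1 = 20.0          # leak conductance of the clamped cell (nS)
g2 = 30.0          # leak conductance of the coupled cell (nS)
for g_c in (5.0, 20.0, 100.0):   # weak to strong coupling
    g_meas = apparent_leak(g1, g_c, g2)
    error = (g_meas - g1) / g1 * 100
    print(f"g_coupling={g_c:5.1f} nS -> measured leak {g_meas:5.1f} nS ({error:+.0f}% error)")
```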
The Dipole Segment Model for Axisymmetrical Elongated Asteroids
NASA Astrophysics Data System (ADS)
Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong
2018-02-01
Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.
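A hedged sketch of evaluating the model's potential at a field point: the closed-form potential of a homogeneous straight segment plus the two point-mass terms at its ends, written with the positive (force-function) sign convention common in asteroid dynamics. The masses and geometry below are illustrative and are not the parameters fitted to 1996 HW1.

```python
# Sketch of the dipole segment potential: a homogeneous straight segment of
# mass m_seg and length L plus point masses m1, m2 at its endpoints, using the
# positive force-function convention. Parameter values are illustrative only.

import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

def dipole_segment_potential(r, p1, p2, m_seg, m1, m2):
    r, p1, p2 = map(np.asarray, (r, p1, p2))
    L = np.linalg.norm(p2 - p1)
    r1 = np.linalg.norm(r - p1)
    r2 = np.linalg.norm(r - p2)
    u_segment = (G * m_seg / L) * np.log((r1 + r2 + L) / (r1 + r2 - L))
    return u_segment + G * m1 / r1 + G * m2 / r2   # segment term plus the two point masses

p1, p2 = (-1000.0, 0.0, 0.0), (1000.0, 0.0, 0.0)   # segment endpoints, metres
field_point = (0.0, 3000.0, 0.0)
print(dipole_segment_potential(field_point, p1, p2, 4e12, 3e12, 3e12))
```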
NASA Technical Reports Server (NTRS)
Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.
1980-01-01
The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of the research facility is reviewed, and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.
Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta
2017-09-19
Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
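The regression calibration approach highlighted in the review can be illustrated with simulated data under the classical measurement error model: a calibration substudy with a reference instrument yields a calibration equation, and the predicted intake replaces the error-prone measurement in the disease model. Variable names and values below are purely illustrative.

```python
# Minimal sketch of regression calibration under the classical measurement
# error model, using simulated data. A calibration substudy with a reference
# measurement is used to predict "true" intake from the error-prone measure,
# and the predicted intake replaces the observed one in the disease model.

import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_intake = rng.normal(50, 10, n)                    # unobserved true exposure
ffq = true_intake + rng.normal(0, 12, n)               # error-prone instrument (FFQ)
outcome = 0.05 * true_intake + rng.normal(0, 2, n)     # continuous health outcome

naive_slope = np.polyfit(ffq, outcome, 1)[0]           # attenuated estimate

# Calibration substudy (here the first 500 subjects) with a reference measure.
calib = slice(0, 500)
reference = true_intake[calib] + rng.normal(0, 3, 500)
lam, intercept = np.polyfit(ffq[calib], reference, 1)  # calibration equation
predicted_intake = intercept + lam * ffq               # applied to everyone

corrected_slope = np.polyfit(predicted_intake, outcome, 1)[0]
print(f"true effect 0.05, naive {naive_slope:.3f}, calibrated {corrected_slope:.3f}")
```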
A pilot study of the safety implications of Australian nurses' sleep and work hours.
Dorrian, Jillian; Lamond, Nicole; van den Heuvel, Cameron; Pincombe, Jan; Rogers, Ann E; Dawson, Drew
2006-01-01
The frequency and severity of adverse events in Australian healthcare is under increasing scrutiny. A recent state government report identified 31 events involving "death or serious [patient] harm" and 452 "very high risk" incidents. Australia-wide, a previous study identified 2,324 adverse medical events (AME) in a single year, with more than half considered preventable. Despite the recognized link between fatigue and error in other industries, to date, few studies of medical errors have assessed the fatigue of the healthcare professionals involved. Nurses work extended and unpredictable hours with a lack of regular breaks and are therefore likely to experience elevated fatigue. Currently, there is very little available information on Australian nurses' sleep or fatigue levels, nor is there any information about whether this affects their performance. This study therefore aims to examine work hours, sleep, fatigue and error occurrence in Australian nurses. Using logbooks, 23 full-time nurses in a metropolitan hospital completed daily recordings for one month (644 days, 377 shifts) of their scheduled and actual work hours, sleep length and quality, sleepiness, and fatigue levels. Frequency and type of nursing errors, near errors, and observed errors (made by others) were recorded. Nurses reported struggling to remain awake during 36% of shifts. Moderate to high levels of stress, physical exhaustion, and mental exhaustion were reported on 23%, 40%, and 36% of shifts, respectively. Extreme drowsiness while driving or cycling home was reported on 45 occasions (11.5%), with three reports of near accidents. Overall, 20 errors, 13 near errors, and 22 observed errors were reported. The perceived potential consequences for the majority of errors were minor; however, 11 errors were associated with moderate and four with potentially severe consequences. Nurses reported that they had trouble falling asleep on 26.8% of days, had frequent arousals on 34.0% of days, and that work-related concerns were either partially or fully responsible for their sleep disruption on 12.5% of occasions. Fourteen out of the 23 nurses reported using a sleep aid. The most commonly reported sleep aids were prescription medications (62.7%), followed by alcohol (26.9%). Total sleep duration was significantly shorter on workdays than days off (p < 0.01). In comparison to other workdays, sleep was significantly shorter on days when an error (p < 0.05) or a near error (p < 0.01) was recorded. In contrast, sleep was higher on workdays when someone else's error was recorded (p = 0.08). Logistic regression analysis indicated that sleep duration was a significant predictor of error occurrence (chi2 = 6.739, p = 0.009, e beta = 0.727). The findings of this pilot study suggest that Australian nurses experience sleepiness and related physical symptoms at work and during their trip home. Further, a measurable number of errors occur of various types and severity. Less sleep may lead to the increased likelihood of making an error, and importantly, the decreased likelihood of catching someone else's error. These pilot results suggest that further investigation into the effects of sleep loss in nursing may be necessary for patient safety from an individual nurse perspective and from a healthcare team perspective.
Fargen, Kyle M; Friedman, William A
2014-01-01
During the last 2 decades, there has been a shift in the U.S. health care system towards improving the quality of health care provided by enhancing patient safety and reducing medical errors. Unfortunately, surgical complications, patient harm events, and malpractice claims remain common in the field of neurosurgery. Many of these events are potentially avoidable. There are an increasing number of publications in the medical literature in which authors address cognitive errors in diagnosis and treatment and strategies for reducing such errors, but these are for the most part absent in the neurosurgical literature. The purpose of this article is to highlight the complexities of medical decision making to a neurosurgical audience, with the hope of providing insight into the biases that lead us towards error and strategies to overcome our innate cognitive deficiencies. To accomplish this goal, we review the current literature on medical errors and just culture, explain the dual process theory of cognition, identify common cognitive errors affecting neurosurgeons in practice, review cognitive debiasing strategies, and finally provide simple methods that can be easily assimilated into neurosurgical practice to improve clinical decision making. Copyright © 2014 Elsevier Inc. All rights reserved.
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-05-01
Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, J; Wang, J; P, J
2016-06-15
Purpose: To optimize the clinical processes of radiotherapy and reduce radiotherapy risks by implementing the risk management tools of failure mode and effects analysis (FMEA) and PDCA (plan-do-check-act). Methods: A multidisciplinary QA (quality assurance) team from our department, consisting of oncologists, physicists, dosimetrists, therapists and an administrator, was established, and an entire-workflow QA process management using FMEA and PDCA tools was implemented for the whole treatment process. After the primary process tree was created, the failure modes and risk priority numbers (RPNs) were determined by each member, and the RPNs were then averaged after team discussion. Results: In the first PDCA cycle, 3 of 9 failure modes had an RPN above 100 and were analyzed further: patient registration error, prescription error and treating the wrong patient. New process controls reduced the occurrence or detectability scores of these top 3 failure modes. Two important corrective actions reduced the highest RPNs from 300 to 50, and the error rate of radiotherapy decreased remarkably. Conclusion: FMEA and PDCA are helpful in identifying potential problems in the radiotherapy process and were shown to improve the safety, quality and efficiency of radiation therapy in our department. Implementing the FMEA approach may improve understanding of the overall radiotherapy process while identifying potential flaws in it. Furthermore, repeating the PDCA cycle can bring us closer to the goal of safer and more accurate radiotherapy.
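For readers unfamiliar with FMEA scoring, the sketch below shows the usual RPN arithmetic (RPN = severity x occurrence x detectability, each typically scored 1-10, averaged across raters) with an action threshold of 100 as used above. The failure modes and scores are hypothetical placeholders, not the team's actual ratings.

```python
# Minimal FMEA sketch: RPN = severity x occurrence x detectability, averaged
# across team members and flagged when above an action threshold (here 100).
# Failure modes and scores are hypothetical, for illustration only.
ratings = {
    "patient registration error":   [(8, 5, 7), (7, 6, 8)],   # (S, O, D) per rater
    "prescription error":           [(9, 4, 6), (8, 5, 5)],
    "treating wrong patient":       [(10, 3, 5), (9, 4, 6)],
    "minor setup deviation":        [(3, 4, 3), (2, 5, 3)],
}

THRESHOLD = 100
for mode, scores in ratings.items():
    rpns = [s * o * d for s, o, d in scores]
    mean_rpn = sum(rpns) / len(rpns)
    flag = "ACTION REQUIRED" if mean_rpn > THRESHOLD else "acceptable"
    print(f"{mode:30s} mean RPN = {mean_rpn:6.1f}  -> {flag}")
```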
A simulation study to quantify the impacts of exposure ...
Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods: ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Results: Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3–85% for population error, and 31–85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate.
Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O
2016-11-01
Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite its benefits, such software is not without limitations, and transcription errors have been widely reported. To evaluate the frequency and nature of non-clinical transcription errors using VR dictation software, a retrospective audit of 378 finalised radiology reports was performed. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time were collected. 67 (17.72%) reports contained ≥1 error, with 7 (1.85%) containing 'significant' and 9 (2.38%) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22%) classified as 'insignificant', 7 (7.78%) as 'significant' and 9 (10%) as 'very significant'. 68 (75.56%) errors were 'spelling and grammar', 20 (22.22%) 'missense' and 2 (2.22%) 'nonsense'. 'Punctuation' was the most common error sub-type, accounting for 27 errors (30%). Complex imaging modalities had higher error rates per report and per sentence: computed tomography contained 0.040 errors per sentence compared with 0.030 for plain film. Longer reports had a higher error rate, with reports of >25 sentences containing an average of 1.23 errors per report compared with 0.09 for reports of 0-5 sentences. These findings highlight the limitations of VR dictation software. While most errors were deemed insignificant, some had the potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates, and this should be taken into account by the reporting radiologist.
Acute Respiratory Distress Syndrome Measurement Error. Potential Effect on Clinical Study Results
Cooke, Colin R.; Iwashyna, Theodore J.; Hofer, Timothy P.
2016-01-01
Rationale: Identifying patients with acute respiratory distress syndrome (ARDS) is a recognized challenge. Experts often have only moderate agreement when applying the clinical definition of ARDS to patients. However, no study has fully examined the implications of low reliability measurement of ARDS on clinical studies. Objectives: To investigate how the degree of variability in ARDS measurement commonly reported in clinical studies affects study power, the accuracy of treatment effect estimates, and the measured strength of risk factor associations. Methods: We examined the effect of ARDS measurement error in randomized clinical trials (RCTs) of ARDS-specific treatments and cohort studies using simulations. We varied the reliability of ARDS diagnosis, quantified as the interobserver reliability (κ-statistic) between two reviewers. In RCT simulations, patients identified as having ARDS were enrolled, and when measurement error was present, patients without ARDS could be enrolled. In cohort studies, risk factors as potential predictors were analyzed using reviewer-identified ARDS as the outcome variable. Measurements and Main Results: Lower reliability measurement of ARDS during patient enrollment in RCTs seriously degraded study power. Holding effect size constant, the sample size necessary to attain adequate statistical power increased by more than 50% as reliability declined, although the result was sensitive to ARDS prevalence. In a 1,400-patient clinical trial, the sample size necessary to maintain similar statistical power increased to over 1,900 when reliability declined from perfect to substantial (κ = 0.72). Lower reliability measurement diminished the apparent effectiveness of an ARDS-specific treatment from a 15.2% (95% confidence interval, 9.4–20.9%) absolute risk reduction in mortality to 10.9% (95% confidence interval, 4.7–16.2%) when reliability declined to moderate (κ = 0.51). In cohort studies, the effect on risk factor associations was similar. Conclusions: ARDS measurement error can seriously degrade statistical power and effect size estimates of clinical studies. The reliability of ARDS measurement warrants careful attention in future ARDS clinical studies. PMID:27159648
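The mechanism behind the power loss described above can be illustrated with a simplified calculation: if only a fraction of enrolled patients truly have ARDS and the treatment benefits only them, the observed absolute risk reduction (ARR) is diluted and the required sample size grows. The sketch below uses a standard two-proportion sample size formula with hypothetical control mortality and enrollment accuracy values; it is not the authors' simulation and does not map kappa to misclassification.

```python
import math
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Standard two-proportion sample size (per arm), normal approximation."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical inputs: control mortality 40%, true absolute risk reduction 15.2
# percentage points for patients who really have ARDS, no effect otherwise.
p_control, true_arr = 0.40, 0.152

for ppv in (1.0, 0.85, 0.70):          # fraction of enrollees with true ARDS
    observed_arr = ppv * true_arr      # effect diluted by misclassified patients
    n = n_per_arm(p_control, p_control - observed_arr)
    print(f"PPV of ARDS label = {ppv:.2f}: observed ARR = {observed_arr:.3f}, "
          f"n per arm ~= {n}")
```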
A Contrastive Approach for Teaching English as a Second Language to Indochinese Students.
ERIC Educational Resources Information Center
Phap, Dam Trung
The manual concentrates on features of English and Indochinese which are dissimilar and, therefore, potential problem areas. These areas were identified through: (1) a contrastive analysis of English and Indochinese (Lao, Cambodian, Vietnamese) phonology, morphology, and syntax; (2) an analysis of Indochinese students' errors; and (3) noting the…
Drug packaging. A key factor to be taken into account when choosing a treatment.
2011-10-01
A drug's packaging contributes to its harm-benefit balance. Highlighting the key practical information and identifying potential sources of error or mix-ups is part and parcel of the correct use of medicines. Select labelling that clearly and prominently displays the important information, including the international nonproprietary name (INN).
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
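The flagging idea (compute the probability of a plan parameter given the clinical information and flag low-probability combinations) can be sketched without the full network machinery. The toy example below estimates a single conditional probability table from hypothetical historical plans; the sites, prescriptions, counts, and the 5% flag threshold are all assumptions, not values from the study.

```python
from collections import Counter, defaultdict

# Toy illustration of the flagging idea (not the authors' networks): estimate
# P(prescribed dose | treatment site) from historical plans, then flag a new
# plan whose parameters have low probability given its clinical information.
# All counts and categories below are hypothetical.
historical_plans = [
    ("lung",  "60Gy/30fx"), ("lung",  "60Gy/30fx"), ("lung",  "45Gy/15fx"),
    ("brain", "30Gy/10fx"), ("brain", "30Gy/10fx"), ("brain", "60Gy/30fx"),
]

counts = defaultdict(Counter)
for site, dose in historical_plans:
    counts[site][dose] += 1

def p_dose_given_site(dose, site):
    site_counts = counts[site]
    total = sum(site_counts.values())
    return site_counts[dose] / total if total else 0.0

FLAG_THRESHOLD = 0.05
new_plan = ("brain", "45Gy/15fx")      # parameters of the plan under review
p = p_dose_given_site(new_plan[1], new_plan[0])
status = "FLAG for physicist review" if p < FLAG_THRESHOLD else "consistent"
print(f"P(dose | site) = {p:.3f} -> {status}")
```

A full Bayesian network extends this idea to many interdependent plan parameters at once, propagating all the available clinical information rather than a single conditioning variable.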
Randhawa, Amarita S; Babalola, Olakiitan; Henney, Zachary; Miller, Michele; Nelson, Tanya; Oza, Meerat; Patel, Chandni; Randhawa, Anupma S; Riley, Joyce; Snyder, Scott; So, Sherri
2016-05-01
Online drug information compendia (ODIC) are valuable tools that health care professionals (HCPs) and consumers use to educate themselves on pharmaceutical products. Research suggests that these resources, although informative and easily accessible, may contain misinformation, posing risk for product misuse and patient harm. Evaluate drug summaries within ODIC for accuracy and completeness and identify product-specific misinformation. Between August 2014 and January 2015, medical information (MI) specialists from 11 pharmaceutical/biotechnology companies systematically evaluated 270 drug summaries within 5 commonly used ODIC for misinformation. Using a standardized approach, errors were identified; classified as inaccurate, incomplete, or omitted; and categorized per sections of the Full Prescribing Information (FPI). On review of each drug summary, content-correction requests were proposed and supported by the respective product's FPI. Across the 270 drug summaries reviewed within the 5 compendia, the median of the total number of errors identified was 782, with the greatest number of errors occurring in the categories of Dosage and Administration, Patient Education, and Warnings and Precautions. The majority of errors were classified as incomplete, followed by inaccurate and omitted. This analysis demonstrates that ODIC may contain misinformation. HCPs and consumers should be aware of the potential for misinformation and consider more than 1 drug information resource, including the FPI and Medication Guide as well as pharmaceutical/biotechnology companies' MI departments, to obtain unbiased, accurate, and complete product-specific drug information to help support the safe and effective use of prescription drug products. © The Author(s) 2016.
Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna
2013-05-01
Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors namely: (1) design of medication charts which complicates order processing and record keeping (2) lack of coordination mechanisms between participants which results in misalignment of local practices (3) reliance on restricted communication bandwidth channels mainly telephone and fax which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents' safety. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Using voluntary reports from physicians to learn from diagnostic errors in emergency medicine.
Okafor, Nnaemeka; Payne, Velma L; Chathampally, Yashwant; Miller, Sara; Doshi, Pratik; Singh, Hardeep
2016-04-01
Diagnostic errors are common in the emergency department (ED), but few studies have comprehensively evaluated their types and origins. We analysed incidents reported by ED physicians to determine disease conditions, contributory factors and patient harm associated with ED-related diagnostic errors. Between 1 March 2009 and 31 December 2013, ED physicians reported 509 incidents using a department-specific voluntary incident-reporting system that we implemented at two large academic hospital-affiliated EDs. For this study, we analysed 209 incidents related to diagnosis. A quality assurance team led by an ED physician champion reviewed each incident and interviewed physicians when necessary to confirm the presence/absence of diagnostic error and to determine the contributory factors. We generated descriptive statistics quantifying disease conditions involved, contributory factors and patient harm from errors. Among the 209 incidents, we identified 214 diagnostic errors associated with 65 unique diseases/conditions, including sepsis (9.6%), acute coronary syndrome (9.1%), fractures (8.6%) and vascular injuries (8.6%). Contributory factors included cognitive (n=317), system-related (n=192) and non-remediable (n=106). Cognitive factors included faulty information verification (41.3%) and faulty information processing (30.6%), whereas system factors included high workload (34.4%) and inefficient ED processes (40.1%). Non-remediable factors included atypical presentation (31.3%) and the patients' inability to provide a history (31.3%). Most errors (75%) involved multiple factors. Major harm was associated with 34/209 (16.3%) of reported incidents. Most diagnostic errors in the ED appeared to relate to common disease conditions. While sustaining diagnostic error reporting programmes might be challenging, our analysis reveals the potential value of such systems in identifying targets for improving patient safety in the ED. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Analyzing temozolomide medication errors: potentially fatal.
Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee
2014-10-01
The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent, and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The website of the Institute for Safe Medication Practices (ISMP) and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports involving hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product: the strength and quantity of capsules on the label were in close proximity to each other, and this has since been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 (47%) of errors, followed by dispensing errors (13, 29%); seven reports (16%) were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%) to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.
Triage: an investigation of the process and potential vulnerabilities.
Hitchcock, Maree; Gillespie, Brigid; Crilly, Julia; Chaboyer, Wendy
2014-07-01
To explore and describe the triage process in the Emergency Department to identify problems and potential vulnerabilities that may affect the triage process. Triage is the first step in the patient journey in the Emergency Department and is often the front line in reducing the potential for errors and mistakes. A fieldwork study to provide an in-depth appreciation and understanding of the triage process. Fieldwork included unstructured observer-only observation, field notes, informal and formal interviews that were conducted over the months of June, July and August 2012. Over 170 hours of observation were performed covering day, evening and night shifts, 7 days of the week. Sixty episodes of triage were observed; 31 informal interviews and 14 formal interviews were completed. Thematic analysis was used. Three themes were identified from the analysis of the data and included: 'negotiating patient flow and care delivery through the Emergency Department'; 'interdisciplinary team communicating and collaborating to provide appropriate and safe care to patients'; and 'varying levels of competence of the triage nurse'. In these themes, vulnerabilities and problems described included over and under triage, extended time to triage assessment, triage errors, multiple patients arriving simultaneously, emergency department and hospital overcrowding. Findings suggest that vulnerabilities in the triage process may cause disruptions to patient flow and compromise care, thus potentially impacting nurses' ability to provide safe and effective care. © 2013 John Wiley & Sons Ltd.
WE-H-BRC-05: Catastrophic Error Metrics for Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, S; Molloy, J
Purpose: Intuitive evaluation of complex radiotherapy treatments is impractical, while data transfer anomalies create the potential for catastrophic treatment delivery errors. Contrary to prevailing wisdom, logical scrutiny can be applied to patient-specific machine settings. Such tests can be automated, applied at the point of treatment delivery and can be dissociated from prior states of the treatment plan, potentially revealing errors introduced early in the process. Methods: Analytical metrics were formulated for conventional and intensity modulated RT (IMRT) treatments. These were designed to assess consistency between monitor unit settings, wedge values, prescription dose and leaf positioning (IMRT). Institutional metric averages for 218 clinical plans were stratified over multiple anatomical sites. Treatment delivery errors were simulated using a commercial treatment planning system and metric behavior assessed via receiver-operator-characteristic (ROC) analysis. A positive result was returned if the erred plan metric value exceeded a given number of standard deviations, e.g. 2. The finding was declared true positive if the dosimetric impact exceeded 25%. ROC curves were generated over a range of metric standard deviations. Results: Data for the conventional treatment metric indicated standard deviations of 3%, 12%, 11%, 8%, and 5% for brain, pelvis, abdomen, lung and breast sites, respectively. Optimum error declaration thresholds yielded true positive rates (TPR) between 0.7 and 1, and false positive rates (FPR) between 0 and 0.2. Two proposed IMRT metrics possessed standard deviations of 23% and 37%. The superior metric returned TPR and FPR of 0.7 and 0.2, respectively, when both leaf position and MUs were modelled. Isolation to only leaf position errors yielded TPR and FPR values of 0.9 and 0.1. Conclusion: Logical tests can reveal treatment delivery errors and prevent large, catastrophic errors. Analytical metrics are able to identify errors in monitor units, wedging and leaf positions with favorable sensitivity and specificity. In part by Varian.
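The ROC evaluation described above can be reproduced in miniature: simulate metric deviations for error-free and erred plans, sweep a threshold expressed in standard deviations, and read off true and false positive rates. The distributions, error rate, and 2-SD operating point below are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)

# Hypothetical plan metric values: error-free plans scatter around the
# institutional mean with some spread; plans with simulated data-transfer
# errors deviate further (units are standard deviations of the metric).
clean = rng.normal(0.0, 1.0, 500)
erred = rng.normal(3.0, 1.5, 50)

scores = np.abs(np.concatenate([clean, erred]))   # flag on |deviation|
labels = np.concatenate([np.zeros(500), np.ones(50)])

fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc(fpr, tpr):.2f}")

# Operating point: declare an error when the metric exceeds 2 SD.
idx = np.argmin(np.abs(thresholds - 2.0))
print(f"At ~2 SD threshold: TPR = {tpr[idx]:.2f}, FPR = {fpr[idx]:.2f}")
```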
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rothenberg, Daniel; Wang, Chien
We describe an emulator of a detailed cloud parcel model which has been trained to assess droplet nucleation from a complex, multimodal aerosol size distribution simulated by a global aerosol–climate model. The emulator is constructed using a sensitivity analysis approach (polynomial chaos expansion) which reproduces the behavior of the targeted parcel model across the full range of aerosol properties and meteorology simulated by the parent climate model. An iterative technique using aerosol fields sampled from a global model is used to identify the critical aerosol size distribution parameters necessary for accurately predicting activation. Across the large parameter space used to train them, the emulators estimate cloud droplet number concentration (CDNC) with a mean relative error of 9.2% for aerosol populations without giant cloud condensation nuclei (CCN) and 6.9% when including them. Versus a parcel model driven by those same aerosol fields, the best-performing emulator has a mean relative error of 4.6%, which is comparable with two commonly used activation schemes also evaluated here (which have mean relative errors of 2.9 and 6.7%, respectively). We identify the potential for regional biases in modeled CDNC, particularly in oceanic regimes, where our best-performing emulator tends to overpredict by 7%, whereas the reference activation schemes range in mean relative error from -3 to 7%. The emulators which include the effects of giant CCN are more accurate in continental regimes (mean relative error of 0.3%) but strongly overestimate CDNC in oceanic regimes by up to 22%, particularly in the Southern Ocean. Finally, the biases in CDNC resulting from the subjective choice of activation scheme could potentially influence the magnitude of the indirect effect diagnosed from the model incorporating it.
Rothenberg, Daniel; Wang, Chien
2017-04-27
We describe an emulator of a detailed cloud parcel model which has been trained to assess droplet nucleation from a complex, multimodal aerosol size distribution simulated by a global aerosol–climate model. The emulator is constructed using a sensitivity analysis approach (polynomial chaos expansion) which reproduces the behavior of the targeted parcel model across the full range of aerosol properties and meteorology simulated by the parent climate model. An iterative technique using aerosol fields sampled from a global model is used to identify the critical aerosol size distribution parameters necessary for accurately predicting activation. Across the large parameter space used to train them, the emulators estimate cloud droplet number concentration (CDNC) with a mean relative error of 9.2% for aerosol populations without giant cloud condensation nuclei (CCN) and 6.9% when including them. Versus a parcel model driven by those same aerosol fields, the best-performing emulator has a mean relative error of 4.6%, which is comparable with two commonly used activation schemes also evaluated here (which have mean relative errors of 2.9 and 6.7%, respectively). We identify the potential for regional biases in modeled CDNC, particularly in oceanic regimes, where our best-performing emulator tends to overpredict by 7%, whereas the reference activation schemes range in mean relative error from -3 to 7%. The emulators which include the effects of giant CCN are more accurate in continental regimes (mean relative error of 0.3%) but strongly overestimate CDNC in oceanic regimes by up to 22%, particularly in the Southern Ocean. Finally, the biases in CDNC resulting from the subjective choice of activation scheme could potentially influence the magnitude of the indirect effect diagnosed from the model incorporating it.
Haynie, Alan C.
2016-01-01
Time spent fishing is the effort metric often studied in fisheries but it may under-represent the effort actually expended by fishers. Entire fishing trips, from the time vessels leave port until they return, may prove more useful for examining trends in fleet dynamics, fisher behavior, and fishing costs. However, such trip information is often difficult to resolve. We identified ~30,000 trips made by vessels that targeted walleye pollock (Gadus chalcogrammus) in the Eastern Bering Sea from 2008–2014 by using vessel monitoring system (VMS) and landings data. We compared estimated trip durations to observer data, which were available for approximately half of trips. Total days at sea were estimated with < 1.5% error and 96.4% of trip durations were either estimated with < 5% error or they were within expected measurement error. With 99% accuracy, we classified trips as fishing for pollock, for another target species, or not fishing. This accuracy lends strong support to the use of our method with unobserved trips across North Pacific fisheries. With individual trips resolved, we examined potential errors in datasets which are often viewed as “the truth.” Despite having > 5 million VMS records (timestamps and vessel locations), this study was as much about understanding and managing data errors as it was about characterizing trips. Missing VMS records were pervasive and they strongly influenced our approach. To understand implications of missing data on inference, we simulated removal of VMS records from trips. Removal of records straightened (i.e., shortened) vessel trajectories, and travel distances were underestimated, on average, by 1.5–13.4% per trip. Despite this bias, VMS proved robust for trip characterization and for improved quality control of human-recorded data. Our scrutiny of human-reported and VMS data advanced our understanding of the potential utility and challenges facing VMS users globally. PMID:27788174
Watson, Jordan T; Haynie, Alan C
2016-01-01
Time spent fishing is the effort metric often studied in fisheries but it may under-represent the effort actually expended by fishers. Entire fishing trips, from the time vessels leave port until they return, may prove more useful for examining trends in fleet dynamics, fisher behavior, and fishing costs. However, such trip information is often difficult to resolve. We identified ~30,000 trips made by vessels that targeted walleye pollock (Gadus chalcogrammus) in the Eastern Bering Sea from 2008-2014 by using vessel monitoring system (VMS) and landings data. We compared estimated trip durations to observer data, which were available for approximately half of trips. Total days at sea were estimated with < 1.5% error and 96.4% of trip durations were either estimated with < 5% error or they were within expected measurement error. With 99% accuracy, we classified trips as fishing for pollock, for another target species, or not fishing. This accuracy lends strong support to the use of our method with unobserved trips across North Pacific fisheries. With individual trips resolved, we examined potential errors in datasets which are often viewed as "the truth." Despite having > 5 million VMS records (timestamps and vessel locations), this study was as much about understanding and managing data errors as it was about characterizing trips. Missing VMS records were pervasive and they strongly influenced our approach. To understand implications of missing data on inference, we simulated removal of VMS records from trips. Removal of records straightened (i.e., shortened) vessel trajectories, and travel distances were underestimated, on average, by 1.5-13.4% per trip. Despite this bias, VMS proved robust for trip characterization and for improved quality control of human-recorded data. Our scrutiny of human-reported and VMS data advanced our understanding of the potential utility and challenges facing VMS users globally.
Parsons, Thomas D; McMahan, Timothy; Kane, Robert
2018-01-01
Clinical neuropsychologists have long underutilized computer technologies for neuropsychological assessment. Given the rapid advances in technology (e.g. virtual reality; tablets; iPhones) and the increased accessibility in the past decade, there is an ongoing need to identify optimal specifications for advanced technologies while minimizing potential sources of error. Herein, we discuss concerns raised by a joint American Academy of Clinical Neuropsychology/National Academy of Neuropsychology position paper. Moreover, we proffer parameters for the development and use of advanced technologies in neuropsychological assessments. We aim to first describe software and hardware configurations that can impact a computerized neuropsychological assessment. This is followed by a description of best practices for developers and practicing neuropsychologists to minimize error in neuropsychological assessments using advanced technologies. We also discuss the relevance of weighing potential computer error in light of possible errors associated with traditional testing. Throughout, there is an emphasis on the need for developers to provide bench test results for their software's performance on various devices and minimum specifications (documented in manuals) for the hardware (e.g. computer, monitor, input devices) in the neuropsychologist's practice. Advances in computerized assessment platforms offer both opportunities and challenges. The challenges can appear daunting but are manageable and require informed consumers who can appreciate the issues and ask pertinent questions in evaluating their options.
MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery
2016-04-01
The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication and has the potential to cause harm. Three organizations (the American Society for Parenteral and Enteral Nutrition [A.S.P.E.N.], the American Society of Health-System Pharmacists, and the National Advisory Group) have published guidelines for ordering, transcribing, compounding and administering PN. These national organizations have published data on compliance with the guidelines and the risk of errors. The purpose of this article is to compare total compliance with ordering, transcription, compounding, and administration guidelines, and the error rate, at a large pediatric institution with published national data. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft- and hard-stop recommendations while simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated in the CPOE program, resulting in practices that were compliant with A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed for error rates, harm rates, and cost reductions and compared with the published literature to determine whether our process showed lower error rates than national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors/84,503 PN prescriptions, or 0.27%, compared with national data in which 74 of 4730 prescriptions (1.6%) over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process: prescribing, transcription, preparation, and administration. There were no transcription errors, and most (95%) errors occurred during administration. We conclude that the meaningful cost reduction and lower error rate (2.7/1000 PN versus 15.6/1000 PN reported in the literature) can be ascribed to the development and implementation of practices that conform to national PN guidelines and recommendations. Electronic ordering and compounding programs eliminated all transcription and related opportunities for errors. © 2015 American Society for Parenteral and Enteral Nutrition.
García-Molina Sáez, C; Urbieta Sanz, E; Madrigal de Torres, M; Vicente Vera, T; Pérez Cárceles, M D
2016-04-01
It is well known that medication reconciliation at discharge is a key strategy to ensure proper drug prescription and the effectiveness and safety of any treatment. Different types of interventions to reduce reconciliation errors at discharge have been tested, many of which are based on the use of electronic tools as they are useful for optimizing the medication reconciliation process. However, not all countries are progressing at the same speed in this task and not all tools are equally effective. It is therefore important to collate updated country-specific data in order to identify possible strategies for improvement in each particular region. Our aim therefore was to analyse the effectiveness of a computerized pharmaceutical intervention to reduce reconciliation errors at discharge in Spain. A quasi-experimental interrupted time-series study was carried out in the cardio-pneumology unit of a general hospital from February to April 2013. The study consisted of three phases: pre-intervention, intervention and post-intervention, each involving 23 days of observations. During the intervention period, a pharmacist was included in the medical team and entered the patient's pre-admission medication in a computerized tool integrated into the electronic clinical history of the patient. The effectiveness was evaluated by the differences between the mean percentages of reconciliation errors in each period using a Mann-Whitney U test accompanied by Bonferroni correction, eliminating autocorrelation of the data by first using an ARIMA analysis. In addition, the types of error identified and their potential seriousness were analysed. A total of 321 patients (119, 105 and 97 in each phase, respectively) were included in the study. For the 3966 medicaments recorded, 1087 reconciliation errors were identified in 77·9% of the patients. The mean percentage of reconciliation errors per patient in the first period of the study was 42·18%, falling to 19·82% during the intervention period (P = 0·000). When the intervention was withdrawn, the mean percentage of reconciliation errors increased again to 27·72% (P = 0·008). The difference between the percentages of pre- and post-intervention periods was statistically significant (P = 0·000). Most reconciliation errors were due to omission (46·7%) or incomplete prescription (43·8%), and 35·3% of them could have caused harm to the patient. A computerized pharmaceutical intervention is shown to reduce reconciliation errors in the context of a high incidence of such errors. © 2016 John Wiley & Sons Ltd.
Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.
2018-01-01
Background: Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods: We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results: We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions: Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D
2018-01-01
Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.
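A minimal version of the cross-version check described in these two abstracts might look like the sketch below: export the headline outputs from each parallel implementation, compare them pairwise, and treat any relative difference beyond +/-5% as a material discrepancy to investigate. The version names echo the paper; the output names and numeric values are hypothetical.

```python
from itertools import combinations

# Hypothetical headline outputs from three parallel implementations of the
# same spreadsheet model (values are illustrative only).
versions = {
    "named_single_cells":   {"in_care": 1240, "on_treatment": 960, "suppressed": 705},
    "column_row_reference": {"in_care": 1650, "on_treatment": 940, "suppressed": 690},
    "named_matrices":       {"in_care": 1238, "on_treatment": 958, "suppressed": 704},
}

MATERIAL = 0.05  # +/-5% relative difference treated as a material error

for (name_a, a), (name_b, b) in combinations(versions.items(), 2):
    for output in a:
        diff = abs(a[output] - b[output]) / max(a[output], b[output])
        if diff > MATERIAL:
            print(f"{output}: {name_a} vs {name_b} differ by {diff:.1%} "
                  f"-> inspect cell references")
```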
Siewert, Bettina; Brook, Olga R; Hochman, Mary; Eisenberg, Ronald L
2016-03-01
The purpose of this study is to analyze the impact of communication errors on patient care, customer satisfaction, and work-flow efficiency and to identify opportunities for quality improvement. We performed a search of our quality assurance database for communication errors submitted from August 1, 2004, through December 31, 2014. Cases were analyzed regarding the step in the imaging process at which the error occurred (i.e., ordering, scheduling, performance of examination, study interpretation, or result communication). The impact on patient care was graded on a 5-point scale from none (0) to catastrophic (4). The severity of impact between errors in result communication and those that occurred at all other steps was compared. Error evaluation was performed independently by two board-certified radiologists. Statistical analysis was performed using the chi-square test and kappa statistics. Three hundred eighty of 422 cases were included in the study. One hundred ninety-nine of the 380 communication errors (52.4%) occurred at steps other than result communication, including ordering (13.9%; n = 53), scheduling (4.7%; n = 18), performance of examination (30.0%; n = 114), and study interpretation (3.7%; n = 14). Result communication was the single most common step, accounting for 47.6% (181/380) of errors. There was no statistically significant difference in impact severity between errors that occurred during result communication and those that occurred at other times (p = 0.29). In 37.9% of cases (144/380), there was an impact on patient care, including 21 minor impacts (5.5%; result communication, n = 13; all other steps, n = 8), 34 moderate impacts (8.9%; result communication, n = 12; all other steps, n = 22), and 89 major impacts (23.4%; result communication, n = 45; all other steps, n = 44). In 62.1% (236/380) of cases, no impact was noted, but 52.6% (200/380) of cases had the potential for an impact. Among 380 communication errors in a radiology department, 37.9% had a direct impact on patient care, with an additional 52.6% having a potential impact. Most communication errors (52.4%) occurred at steps other than result communication, with similar severity of impact.
Learning from patients: Identifying design features of medicines that cause medication use problems.
Notenboom, Kim; Leufkens, Hubert Gm; Vromans, Herman; Bouvy, Marcel L
2017-01-30
Usability is a key factor in ensuring safe and efficacious use of medicines. However, several studies showed that people experience a variety of problems using their medicines. The purpose of this study was to identify design features of oral medicines that cause use problems among older patients in daily practice. A qualitative study with semi-structured interviews on the experiences of older people with the use of their medicines was performed (n=59). Information on practical problems, strategies to overcome these problems and the medicines' design features that caused these problems were collected. The practical problems and management strategies were categorised into 'use difficulties' and 'use errors'. A total of 158 use problems were identified, of which 45 were categorized as use difficulties and 113 as use error. Design features that contributed the most to the occurrence of use difficulties were the dimensions and surface texture of the dosage form (29.6% and 18.5%, respectively). Design features that contributed the most to the occurrence of use errors were the push-through force of blisters (22.1%) and tamper evident packaging (12.1%). These findings will help developers of medicinal products to proactively address potential usability issues with their medicines. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Adverse Drug Events caused by Serious Medication Administration Errors
Sawarkar, Abhivyakti; Keohane, Carol A.; Maviglia, Saverio; Gandhi, Tejal K; Poon, Eric G
2013-01-01
OBJECTIVE To determine how often serious or life-threatening medication administration errors with the potential to cause patient harm (or potential adverse drug events) result in actual patient harm (or adverse drug events (ADEs)) in the hospital setting. DESIGN Retrospective chart review of clinical events that transpired following observed medication administration errors. BACKGROUND Medication errors are common at the medication administration stage for hospitalized patients. While many of these errors are considered capable of causing patient harm, it is not clear how often patients are actually harmed by these errors. METHODS In a previous study in which 14,041 medication administrations in an acute-care hospital were directly observed, investigators discovered 1271 medication administration errors, of which 133 had the potential to cause serious or life-threatening harm to patients and were considered serious or life-threatening potential ADEs. In the current study, clinical reviewers conducted detailed chart reviews of cases where a serious or life-threatening potential ADE occurred to determine if an actual ADE developed following the potential ADE. Reviewers further assessed the severity of the ADE and its attribution to the administration error. RESULTS Ten (7.5% [95% C.I. 6.98, 8.01]) actual adverse drug events (ADEs) resulted from the 133 serious and life-threatening potential ADEs, of which six resulted in significant injury, three in serious injury, and one in life-threatening injury. Therefore, 4 (3% [95% C.I. 2.12, 3.6]) serious and life-threatening potential ADEs led to serious or life-threatening ADEs. Half of the ten actual ADEs were caused by dosage or monitoring errors for anti-hypertensives. The life-threatening ADE was caused by an error that was both a transcription and a timing error. CONCLUSION Potential ADEs at the medication administration stage can cause serious patient harm. Given previous estimates of serious or life-threatening potential ADEs of 1.33 per 100 medication doses administered, in a hospital where 6 million doses are administered per year, about 4000 preventable ADEs would be attributable to medication administration errors annually. PMID:22791691
Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley
2017-11-17
Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency (for example, in car seat clinics or during prototype user testing) to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation performance data revealed several areas of misuse of the CRS/booster seat associated with high potential injury risk. Collectively, findings indicate that standardized ESS ratings are useful for estimating injury risk potential associated with real-world CRS and booster seat installation errors.
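The composite measure described above can be sketched as a simple product of the expert severity rating and the observed frequency of each installation error. The error types, ESS values, and counts below are hypothetical placeholders, not the panel's actual ratings or the study's observations.

```python
# Illustrative risk priority number: RPN = expert injury-severity score (ESS)
# multiplied by the observed frequency of the installation error.
# All values are hypothetical, for illustration only.
ess = {                          # mean expert severity rating per error type
    "loose harness":               8.5,
    "incorrect belt path":         7.0,
    "chest clip too low":          5.5,
    "loose vehicle installation":  8.0,
}
observed = {                     # number of participants making each error
    "loose harness":               12,
    "incorrect belt path":          9,
    "chest clip too low":          18,
    "loose vehicle installation":  14,
}

rpn = {err: ess[err] * observed[err] for err in ess}
for err, score in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{err:28s} RPN = {score:6.1f}")
```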
NASA Astrophysics Data System (ADS)
Gourdji, S. M.; Yadav, V.; Karion, A.; Mueller, K. L.; Conley, S.; Ryerson, T.; Nehrkorn, T.; Kort, E. A.
2018-04-01
Urban greenhouse gas (GHG) flux estimation with atmospheric measurements and modeling, i.e. the ‘top-down’ approach, can potentially support GHG emission reduction policies by assessing trends in surface fluxes and detecting anomalies from bottom-up inventories. Aircraft-collected GHG observations also have the potential to help quantify point-source emissions that may not be adequately sampled by fixed surface tower-based atmospheric observing systems. Here, we estimate CH4 emissions from a known point source, the Aliso Canyon natural gas leak in Los Angeles, CA, from October 2015–February 2016, using atmospheric inverse models with airborne CH4 observations from twelve flights ≈4 km downwind of the leak and surface sensitivities from a mesoscale atmospheric transport model. This leak event has been well-quantified previously using various methods by the California Air Resources Board, thereby providing high confidence in the mass-balance leak rate estimates of Conley et al (2016), used here for comparison to inversion results. Inversions with an optimal setup are shown to provide estimates of the leak magnitude, on average, within a third of the mass balance values, with remaining errors in estimated leak rates predominantly explained by modeled wind speed errors of up to 10 m s-1, quantified by comparing airborne meteorological observations with modeled values along the flight track. An inversion setup using scaled observational wind speed errors in the model-data mismatch covariance matrix is shown to significantly reduce the influence of transport model errors on spatial patterns and estimated leak rates from the inversions. In sum, this study takes advantage of a natural tracer release experiment (i.e. the Aliso Canyon natural gas leak) to identify effective approaches for reducing the influence of transport model error on atmospheric inversions of point-source emissions, while suggesting future potential for integrating surface tower and aircraft atmospheric GHG observations in top-down urban emission monitoring systems.
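The role of the model-data mismatch covariance can be illustrated with a toy single-source inversion: when each observation's error variance is scaled with its wind-speed error, poorly modeled observations are down-weighted in the leak-rate estimate. This is a deliberately simplified sketch (scalar source, diagonal covariance, made-up numbers), not the study's mesoscale inversion setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-source inversion: estimate a scalar leak rate q from downwind
# observations y = h*q + noise, weighting each observation by its error
# variance (model-data mismatch), which here is scaled with wind-speed error.
n_obs = 50
h = rng.uniform(0.5, 2.0, n_obs)          # modeled sensitivity (footprint) per obs
true_q = 40.0                             # "true" leak rate, arbitrary units
wind_err = rng.uniform(0.5, 5.0, n_obs)   # per-observation wind-speed error proxy
sigma = 1.0 + 0.5 * wind_err              # observation error std, scaled by wind error
y = h * true_q + rng.normal(0.0, sigma)

# Weighted least squares: q_hat = (h^T R^-1 h)^-1 h^T R^-1 y with R diagonal.
w = 1.0 / sigma**2
q_hat = np.sum(w * h * y) / np.sum(w * h * h)
q_unweighted = np.sum(h * y) / np.sum(h * h)
print(f"weighted estimate: {q_hat:.1f}, unweighted: {q_unweighted:.1f}, truth: {true_q}")
```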
Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K
2016-11-25
Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NO x , EC, PM 2.5 , SO 4 , O 3 ) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NO x or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
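The attenuation mechanism reported above can be demonstrated with a stripped-down, single-pollutant version of the simulation: generate daily counts from a Poisson model with RR = 1.05 per interquartile range of the true exposure, add classical measurement error, and refit. The exposure distribution, baseline count, and error variance below are arbitrary illustrative choices, not the study's empirical inputs.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_days = 1460                                  # roughly four years of daily data

true_exposure = rng.gamma(shape=4.0, scale=1.0, size=n_days)   # hypothetical pollutant
iqr = np.subtract(*np.percentile(true_exposure, [75, 25]))

# Health model: RR = 1.05 per IQR of the true exposure, baseline ~50 visits/day.
log_rr_per_unit = np.log(1.05) / iqr
counts = rng.poisson(np.exp(np.log(50) + log_rr_per_unit * true_exposure))

# Observed exposure = true exposure plus classical (additive) measurement error.
observed = true_exposure + rng.normal(0.0, 1.5, n_days)

for label, x in [("true", true_exposure), ("with error", observed)]:
    X = sm.add_constant(x)
    fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
    rr_iqr = np.exp(fit.params[1] * iqr)
    print(f"exposure {label:10s}: estimated RR per IQR = {rr_iqr:.3f}")
```

The RR refit on the error-laden exposure comes out closer to the null than 1.05, which is the attenuation effect the abstract describes; the copollutant case adds correlated errors and a second regressor to the same basic recipe.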
Fire in operating theatres: DaSH-ing to the rescue.
Wilson, Liam; Farooq, Omer
2018-01-01
Operating theatres are dynamic environments that require multi-professional team interactions. Effective team working is essential for efficient delivery of safe patient care. A fire in the operating theatre is a rare but potentially life-threatening event for both patients and staff. A rapid and cohesive response from theatre and allied staff, including porters and the fire safety officer, is paramount. We delivered a training session that utilised in situ simulation (simulation in the workplace). After conducting a needs analysis, learning objectives were agreed. After thorough planning, the date and location of the training session were identified. Contingency plans were put in place to ensure that patient care was not compromised at any point. To ensure success, checklists for faculty were devised and adhered to. A medium-fidelity manikin with live monitoring was used. The first part of the scenario involved management of a surgical emergency by theatre staff. The second part involved management of a fire in the operating theatre while an emergency procedure was being undertaken. To achieve maximum learning potential, debriefing was provided immediately after each part of the scenario. A fire safety officer was present as a content expert. Latent errors (hidden errors in the workplace, gaps in staff knowledge, etc.) were identified. Malfunctioning of theatre floor windows and staff unawareness about the location of an evacuation site were some of the identified latent errors. Thorough feedback to address these issues was provided to the participants on the day. A detailed report of the training session was given to the relevant departments. This resulted in the equipment faults being rectified. The training session was a very positive experience and helped not only to improve participants' knowledge, behaviour and confidence but also left the system and environment better equipped.
A Procedure for Studying the Cognitive Processes Used During Problem Solving: An Exploratory Study.
ERIC Educational Resources Information Center
Lester, Frank K., Jr.
This study explores the potential effectiveness of a new procedure for identifying and studying certain of the cognitive processes used during problem solving. The procedure is used in an attempt to categorize the types of conceptual thinking problem solvers employ, to study trial-and-error behavior, and to investigate problem solvers' abilities…
Modeling, Analyzing, and Mitigating Dissonance Between Alerting Systems
NASA Technical Reports Server (NTRS)
Song, Lixia; Kuchar, James K.
2003-01-01
Alerting systems are becoming pervasive in process operations, which may result in the potential for dissonance or conflict in information from different alerting systems that suggests different threat levels and/or actions to resolve hazards. Little is currently available to help in predicting or solving the dissonance problem. This thesis presents a methodology to model and analyze dissonance between alerting systems, providing both a theoretical foundation for understanding dissonance and a practical basis from which specific problems can be addressed. A state-space representation of multiple alerting system operation is generalized that can be tailored across a variety of applications. Based on the representation, two major causes of dissonance are identified: logic differences and sensor error. Additionally, several possible types of dissonance are identified. A mathematical analysis method is developed to identify the conditions for dissonance originating from logic differences. A probabilistic analysis methodology is developed to estimate the probability of dissonance originating from sensor error, and to compare the relative contribution to dissonance of sensor error against the contribution from logic differences. A hybrid model, which describes the dynamic behavior of the process with multiple alerting systems, is developed to identify dangerous dissonance space, from which the process can lead to disaster. Methodologies to avoid or mitigate dissonance are outlined. Two examples are used to demonstrate the application of the methodology. First, a conceptual In-Trail Spacing example is presented. The methodology is applied to identify the conditions for possible dissonance, to identify relative contribution of logic difference and sensor error, and to identify dangerous dissonance space. Several proposed mitigation methods are demonstrated in this example. In the second example, the methodology is applied to address the dissonance problem between two air traffic alert and avoidance systems: the existing Traffic Alert and Collision Avoidance System (TCAS) vs. the proposed Airborne Conflict Management system (ACM). Conditions on ACM resolution maneuvers are identified to avoid dynamic dissonance between TCAS and ACM. Also included in this report is an Appendix written by Lee Winder about recent and continuing work on alerting systems design. The application of Markov Decision Process (MDP) theory to complex alerting problems is discussed and illustrated with an abstract example system.
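A stripped-down example helps make the two causes of dissonance identified above, logic differences and sensor error, concrete. The thresholds, state variable, and noise levels below are hypothetical and are not taken from the TCAS/ACM analysis in the report.

```python
import numpy as np

# Minimal sketch of "dissonance" between two alerting systems monitoring the
# same hazard state (here, separation distance). Thresholds and the sensor
# error model are invented for illustration only.
rng = np.random.default_rng(2)

def system_a(range_nm):          # alerts when measured range < 5 nm
    return range_nm < 5.0

def system_b(range_nm):          # alerts when measured range < 4 nm (different logic)
    return range_nm < 4.0

# 1) Logic difference: the band of true states where the two systems disagree.
ranges = np.linspace(0.0, 10.0, 1001)
logic_dissonance = system_a(ranges) != system_b(ranges)
band = ranges[logic_dissonance]
print(f"logic dissonance for true range in [{band.min():.2f}, {band.max():.2f}] nm")

# 2) Sensor error: each system uses its own noisy measurement, so they can
#    disagree even when their alerting logic is identical.
def prob_dissonance(true_range, sd_a=0.3, sd_b=0.3, n=100_000):
    meas_a = true_range + rng.normal(0.0, sd_a, n)
    meas_b = true_range + rng.normal(0.0, sd_b, n)
    return np.mean(system_a(meas_a) != system_a(meas_b))  # same logic, different sensors

for r in (3.0, 5.0, 7.0):
    print(f"true range {r:.1f} nm: P(dissonance from sensor error) = {prob_dissonance(r):.3f}")
```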
Tully, Mary P; Ashcroft, Darren M; Dornan, Tim; Lewis, Penny J; Taylor, David; Wass, Val
2009-01-01
Prescribing errors are common; they result in adverse events and harm to patients, and it is unclear how best to prevent them because recommendations are more often based on surmised rather than empirically collected data. The aim of this systematic review was to identify all informative published evidence concerning the causes of and factors associated with prescribing errors in specialist and non-specialist hospitals, collate it, analyse it qualitatively and synthesize conclusions from it. Seven electronic databases were searched for articles published between 1985 and July 2008. The reference lists of all informative studies were searched for additional citations. To be included, a study had to be of handwritten prescriptions for adult or child inpatients that reported empirically collected data on the causes of or factors associated with errors. Publications in languages other than English and studies that evaluated errors for only one disease, one route of administration or one type of prescribing error were excluded. Seventeen papers reporting 16 studies, selected from 1268 papers identified by the search, were included in the review. Studies from the US and the UK in university-affiliated hospitals predominated (10/16 [62%]). The definition of a prescribing error varied widely and the included studies were highly heterogeneous. Causes were grouped according to Reason's model of accident causation into active failures, error-provoking conditions and latent conditions. The active failure most frequently cited was a mistake due to inadequate knowledge of the drug or the patient. Skill-based slips and memory lapses were also common. Where error-provoking conditions were reported, there was at least one per error. These included lack of training or experience, fatigue, stress, high workload for the prescriber and inadequate communication between healthcare professionals. Latent conditions included reluctance to question senior colleagues and inadequate provision of training. Prescribing errors are often multifactorial, with several active failures and error-provoking conditions often acting together to cause them. In the face of such complexity, solutions addressing a single cause, such as lack of knowledge, are likely to have only limited benefit. Further rigorous study, seeking potential ways of reducing error, needs to be conducted. Multifactorial interventions across many parts of the system are likely to be required.
Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms
NASA Astrophysics Data System (ADS)
Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.
2017-08-01
Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure meridional flow is very high, small systematic errors have a noticeable impact on measured meridional flow profiles. Additionally, the uncertainties of these measurements have historically been underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements which are unreliable near the solar limb. Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to remove these systematic errors and, in the process, improve the accuracy of the measurements by an order of magnitude. We also present a detailed analysis of uncertainties in these measurements using synthetic magnetograms and the quantification of an upper limit below which meridional flow measurements cannot be trusted as a function of latitude.
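For readers unfamiliar with the technique, a minimal sub-pixel correlation tracker shows where pixel-quantization bias can enter. The signal shape, noise level, and parabolic peak interpolation below are generic choices for illustration; they are not the HMI pipeline or the authors' correction method.

```python
import numpy as np

# Sketch of 1-D sub-pixel correlation tracking and a simple check of the
# systematic bias it introduces across fractional-pixel shifts.
rng = np.random.default_rng(3)

def track_shift(ref, img):
    """Estimate the shift of img relative to ref via cross-correlation plus
    a three-point parabolic fit around the integer correlation peak."""
    cc = np.array([np.sum(ref * np.roll(img, -k)) for k in range(-5, 6)])
    k0 = np.argmax(cc)
    if 0 < k0 < len(cc) - 1:                      # parabolic sub-pixel refinement
        denom = cc[k0 - 1] - 2 * cc[k0] + cc[k0 + 1]
        frac = 0.5 * (cc[k0 - 1] - cc[k0 + 1]) / denom
    else:
        frac = 0.0
    return (k0 - 5) + frac

x = np.arange(256)
true_shifts = np.linspace(-0.5, 0.5, 21)
biases = []
for s in true_shifts:
    ref = np.exp(-((x - 100) / 6.0) ** 2) + 0.01 * rng.normal(size=x.size)
    img = np.exp(-((x - 100 - s) / 6.0) ** 2) + 0.01 * rng.normal(size=x.size)
    biases.append(track_shift(ref, img) - s)      # estimation error at this shift

print("mean systematic bias (pixels):", np.mean(biases))
print("max |bias| across sub-pixel shifts:", np.max(np.abs(biases)))
```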
Modeling Inborn Errors of Hepatic Metabolism Using Induced Pluripotent Stem Cells.
Pournasr, Behshad; Duncan, Stephen A
2017-11-01
Inborn errors of hepatic metabolism are caused by deficiencies, most commonly of a single enzyme, arising from heritable mutations in the genome. Individually such diseases are rare, but collectively they are common. Advances in genome-wide association studies and DNA sequencing have helped researchers identify the underlying genetic basis of such diseases. Unfortunately, cellular and animal models that accurately recapitulate these inborn errors of hepatic metabolism in the laboratory have been lacking. Recently, investigators have exploited molecular techniques to generate induced pluripotent stem cells from patients' somatic cells. Induced pluripotent stem cells can differentiate into a wide variety of cell types, including hepatocytes, thereby offering an innovative approach to unravel the mechanisms underlying inborn errors of hepatic metabolism. Moreover, such cell models could potentially provide a platform for the discovery of therapeutics. In this mini-review, we present a brief overview of the state-of-the-art in using pluripotent stem cells for such studies. © 2017 American Heart Association, Inc.
Theoretical and experimental errors for in situ measurements of plant water potential.
Shackel, K A
1984-07-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (-0.6 to -1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design.
Theoretical and Experimental Errors for In Situ Measurements of Plant Water Potential 1
Shackel, Kenneth A.
1984-01-01
Errors in psychrometrically determined values of leaf water potential caused by tissue resistance to water vapor exchange and by lack of thermal equilibrium were evaluated using commercial in situ psychrometers (Wescor Inc., Logan, UT) on leaves of Tradescantia virginiana (L.). Theoretical errors in the dewpoint method of operation for these sensors were demonstrated. After correction for these errors, in situ measurements of leaf water potential indicated substantial errors caused by tissue resistance to water vapor exchange (4 to 6% reduction in apparent water potential per second of cooling time used) resulting from humidity depletions in the psychrometer chamber during the Peltier condensation process. These errors were avoided by use of a modified procedure for dewpoint measurement. Large changes in apparent water potential were caused by leaf and psychrometer exposure to moderate levels of irradiance. These changes were correlated with relatively small shifts in psychrometer zero offsets (−0.6 to −1.0 megapascals per microvolt), indicating substantial errors caused by nonisothermal conditions between the leaf and the psychrometer. Explicit correction for these errors is not possible with the current psychrometer design. PMID:16663701
Applying lessons from social psychology to transform the culture of error disclosure.
Han, Jason; LaMarra, Denise; Vapiwala, Neha
2017-10-01
The ability to carry out prompt and effective error disclosure has been described in the literature as an essential skill among physicians that can lead to improved patient satisfaction, staff well-being and hospital outcomes. However, few studies have addressed the social psychology principles that may influence physician behaviour. The authors provide an overview of recent administrative measures designed to encourage physicians to disclose error, but note that deliberate practice, buttressed with lessons from social psychology, is needed to implement further productive behavioural changes. Two main cognitive biases that may hinder error disclosure are identified, namely: fundamental attribution error, and forecasting error. Strategies to overcome these maladaptive cognitive patterns are discussed. The authors note that interactions with standardised patients (SPs) can be used to simulate hospital encounters and help teach important behavioural considerations. Virtual reality is introduced as an immersive, realistic and easily scalable technology that can supplement traditional curricula. Lastly, the authors highlight the importance of establishing a professional standard of competence, potentially by incorporating difficult patient encounters, including disclosure of error, into medical licensing examinations that assess clinical skills. Existing curricula that cover physician error disclosure may benefit from reviewing the social psychology literature. These lessons, incorporated into SP programmes and emerging technological platforms, may improve training and evaluative methods for all medical trainees. © 2017 John Wiley & Sons Ltd and The Association for the Study of Medical Education.
Psychrometric Measurement of Leaf Water Potential: Lack of Error Attributable to Leaf Permeability.
Barrs, H D
1965-07-02
A report that low permeability could cause gross errors in psychrometric determinations of water potential in leaves has not been confirmed. No measurable error from this source could be detected for either of two types of thermocouple psychrometer tested on four species, each at four levels of water potential. No source of error other than tissue respiration could be demonstrated.
[Application of root cause analysis in healthcare].
Hsu, Tsung-Fu
2007-12-01
The main purpose of this study was to explore various aspects of root cause analysis (RCA), including its definition, underlying rationale, main objective, implementation procedures, most common analysis methodology (fault tree analysis, FTA), and advantages and methodologic limitations in regard to healthcare. Several adverse events that occurred at a certain hospital were also analyzed by the author using FTA as part of this study. RCA is a process employed to identify basic and contributing causal factors underlying performance variations associated with adverse events. The rationale of RCA is to offer a systemic approach to improving patient safety that does not assign blame or liability to individuals. The four-step process involved in conducting an RCA includes: RCA preparation, proximate cause identification, root cause identification, and recommendation generation and implementation. FTA is a logical, structured process that can help identify potential causes of system failure before actual failures occur. Some advantages and significant methodologic limitations of RCA were discussed. Finally, we emphasized that errors stem principally from faults attributable to system design, practice guidelines, work conditions, and other human factors, which can lead health professionals to commit negligent acts or mistakes in healthcare. We must explore the root causes of medical errors to eliminate potential system failure factors. Also, a systemic approach is needed to resolve medical errors and move beyond a current culture centered on assigning fault to individuals. By constructing a truly patient-centered safety culture in healthcare, we can help encourage clients to accept state-of-the-art healthcare services.
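A minimal numeric example of the fault tree analysis (FTA) gate arithmetic referred to above may be helpful. The event structure and probabilities are invented purely to show how AND/OR gates combine; real RCA trees are derived from investigation findings, not assumed numbers.

```python
# Minimal fault-tree calculation sketch (hypothetical probabilities, for
# illustration of the AND/OR gate arithmetic only).

def p_or(*probs):
    """Probability that at least one of several independent events occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs):
    """Probability that all of several independent events occur."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Toy tree: a wrong dose reaches the patient (top event) only if a prescribing
# error occurs AND both downstream barriers fail.
p_knowledge_gap         = 0.015   # invented basic-event probabilities (per order)
p_transcription_slip    = 0.010
p_pharmacy_screen_fails = 0.10
p_nursing_check_fails   = 0.15

p_prescribing_error = p_or(p_knowledge_gap, p_transcription_slip)             # OR gate
p_barriers_fail     = p_and(p_pharmacy_screen_fails, p_nursing_check_fails)   # AND gate
p_top_event         = p_and(p_prescribing_error, p_barriers_fail)

print(f"P(prescribing error) = {p_prescribing_error:.4f}")
print(f"P(top event: wrong dose reaches patient) = {p_top_event:.6f}")
```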
Yelland, Lisa N; Kahan, Brennan C; Dent, Elsa; Lee, Katherine J; Voysey, Merryn; Forbes, Andrew B; Cook, Jonathan A
2018-06-01
Background/aims In clinical trials, it is not unusual for errors to occur during the process of recruiting, randomising and providing treatment to participants. For example, an ineligible participant may inadvertently be randomised, a participant may be randomised in the incorrect stratum, a participant may be randomised multiple times when only a single randomisation is permitted or the incorrect treatment may inadvertently be issued to a participant at randomisation. Such errors have the potential to introduce bias into treatment effect estimates and affect the validity of the trial, yet there is little motivation for researchers to report these errors and it is unclear how often they occur. The aim of this study is to assess the prevalence of recruitment, randomisation and treatment errors and review current approaches for reporting these errors in trials published in leading medical journals. Methods We conducted a systematic review of individually randomised, phase III, randomised controlled trials published in New England Journal of Medicine, Lancet, Journal of the American Medical Association, Annals of Internal Medicine and British Medical Journal from January to March 2015. The number and type of recruitment, randomisation and treatment errors that were reported and how they were handled were recorded. The corresponding authors were contacted for a random sample of trials included in the review and asked to provide details on unreported errors that occurred during their trial. Results We identified 241 potentially eligible articles, of which 82 met the inclusion criteria and were included in the review. These trials involved a median of 24 centres and 650 participants, and 87% involved two treatment arms. Recruitment, randomisation or treatment errors were reported in 32 of 82 trials (39%), with a median of eight errors per trial. The most commonly reported error was ineligible participants inadvertently being randomised. No mention of recruitment, randomisation or treatment errors was found in the remaining 50 of 82 trials (61%). Based on responses from 9 of the 15 corresponding authors who were contacted regarding recruitment, randomisation and treatment errors, between 1% and 100% of the errors that occurred in their trials were reported in the trial publications. Conclusion Recruitment, randomisation and treatment errors are common in individually randomised, phase III trials published in leading medical journals, but reporting practices are inadequate and reporting standards are needed. We recommend researchers report all such errors that occurred during the trial and describe how they were handled in trial publications to improve transparency in reporting of clinical trials.
Chen, Haiyang; Teng, Yanguo; Chen, Ruihui; Li, Jiao; Wang, Jinsheng
2016-08-01
Due to their toxicity and bioaccumulation, trace metals in soils can result in a wide range of toxic effects on animals, plants, microbes, and even humans. Recognizing the contamination characteristics of soil metals and especially apportioning their potential sources are the necessary preconditions for pollution prevention and control. Over the past decades, several receptor models have been developed for source apportionment. Among them, positive matrix factorization (PMF) has gained popularity and was recommended by the US Environmental Protection Agency as a general modeling tool. In this study, an extended chemometrics model, multivariate curve resolution-alternating least squares based on maximum likelihood principal component analysis (MCR-ALS/MLPCA), was proposed for source apportionment of soil metals and applied to identify the potential sources of trace metals in soils around Miyun Reservoir. Similar to PMF, the MCR-ALS/MLPCA model can incorporate measurement error information and non-negativity constraints in its calculation procedures. Model validation with a synthetic dataset suggested that MCR-ALS/MLPCA could extract acceptable recovered source profiles even at relatively high error levels. When applied to identify the sources of trace metals in soils around Miyun Reservoir, the MCR-ALS/MLPCA model obtained profiles highly similar to those from PMF. On the other hand, the assessment of contamination status showed that the soils around the reservoir were slightly to moderately polluted by trace metals but posed acceptable risks to the public. Mining activities, fertilizers and agrochemicals, and atmospheric deposition were identified as the potential anthropogenic sources, with contributions of 24.8, 14.6, and 13.3 %, respectively. In order to protect the drinking water source of Beijing, special attention should be paid to the metal inputs to soils from mining and agricultural activities.
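A bare-bones sketch of the alternating-least-squares step at the core of MCR-ALS is shown below. It factors a synthetic samples-by-metals matrix into non-negative contributions and profiles; note that it omits the maximum-likelihood (MLPCA) error weighting that distinguishes the study's model, so it illustrates only the general mechanism on invented data.

```python
import numpy as np

# Bare-bones MCR-ALS sketch: factor a concentration matrix X (samples x metals)
# into non-negative source contributions C and source profiles S, X ≈ C @ S.T.
rng = np.random.default_rng(4)
n_samples, n_metals, n_sources = 120, 8, 3

C_true = rng.uniform(0, 1, (n_samples, n_sources))
S_true = rng.uniform(0, 1, (n_metals, n_sources))
X = C_true @ S_true.T + 0.01 * rng.normal(size=(n_samples, n_metals))

C = rng.uniform(0, 1, (n_samples, n_sources))       # random initial guess
for _ in range(500):
    # update profiles, then contributions, clipping to enforce non-negativity
    S = np.clip(np.linalg.lstsq(C, X, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, X.T, rcond=None)[0].T, 0, None)

resid = X - C @ S.T
print("relative reconstruction error:",
      np.linalg.norm(resid) / np.linalg.norm(X))
```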
Understanding Risk Tolerance and Building an Effective Safety Culture
NASA Technical Reports Server (NTRS)
Loyd, David
2018-01-01
Estimates suggest that 65-90 percent of catastrophic mishaps are due to human error. Approximately 75 percent of NASA's mishap causes are estimated to be human factors-related. As much as we'd like to error-proof our work environment, even the most automated and complex technical endeavors require human interaction... and are vulnerable to human frailty. Industry and government are focusing not only on human factors integration into hazardous work environments, but are also looking for practical approaches to cultivating a strong Safety Culture that diminishes risk. Industry and government organizations have recognized the value of monitoring leading indicators to identify potential risk vulnerabilities. NASA has adapted this approach to assess risk controls associated with hazardous, critical, and complex facilities. NASA's facility risk assessments integrate commercial loss control, OSHA (Occupational Safety and Health Administration) Process Safety, API (American Petroleum Institute) Performance Indicator Standard, and NASA Operational Readiness Inspection concepts to identify risk control vulnerabilities.
NASA Astrophysics Data System (ADS)
Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu
2018-05-01
Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
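The idea of composing small geometric-error transforms into a volumetric tool-position error can be sketched briefly. The error parameters and axis stack below are invented placeholders for a generic three-axis machine; the study's actual model is built from screw theory with PIGEs and PDGEs identified by laser tracker.

```python
import numpy as np

# Sketch of volumetric-error prediction by composing small-error homogeneous
# transforms along an X-Y-Z axis stack. Error magnitudes are placeholders.

def trans(v):
    T = np.eye(4); T[:3, 3] = v; return T

def small_rot(rx, ry, rz):
    """First-order homogeneous rotation for small angular errors (rad)."""
    T = np.eye(4)
    T[:3, :3] += np.array([[0, -rz, ry], [rz, 0, -rx], [-ry, rx, 0]])
    return T

def axis_error(pos):
    """Toy PDGE: positioning and straightness errors growing with travel,
    plus a fixed small angular (PIGE-like) error."""
    return trans([5e-6 * pos, 1e-6 * pos, 0.0]) @ small_rot(0.0, 0.0, 2e-5)

def tool_position(x, y, z, with_errors=True):
    T = np.eye(4)
    for axis_vec, pos in (([1, 0, 0], x), ([0, 1, 0], y), ([0, 0, 1], z)):
        T = T @ trans(np.array(axis_vec, dtype=float) * pos)
        if with_errors:
            T = T @ axis_error(pos)
    return T[:3, 3]

nominal = tool_position(800.0, 400.0, 200.0, with_errors=False)
actual = tool_position(800.0, 400.0, 200.0, with_errors=True)
print("volumetric error vector (mm):", actual - nominal)
# A simple compensation strategy is to command nominal + (nominal - actual).
```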
Littel, Marianne; van den Berg, Ivo; Luijten, Maartje; van Rooij, Antonius J; Keemink, Lianne; Franken, Ingmar H A
2012-09-01
Excessive computer gaming has recently been proposed as a possible pathological illness. However, research on this topic is still in its infancy and underlying neurobiological mechanisms have not yet been identified. The determination of underlying mechanisms of excessive gaming might be useful for the identification of those at risk, a better understanding of the behavior and the development of interventions. Excessive gaming has been often compared with pathological gambling and substance use disorder. Both disorders are characterized by high levels of impulsivity, which incorporates deficits in error processing and response inhibition. The present study aimed to investigate error processing and response inhibition in excessive gamers and controls using a Go/NoGo paradigm combined with event-related potential recordings. Results indicated that excessive gamers show reduced error-related negativity amplitudes in response to incorrect trials relative to correct trials, implying poor error processing in this population. Furthermore, excessive gamers display higher levels of self-reported impulsivity as well as more impulsive responding as reflected by less behavioral inhibition on the Go/NoGo task. The present study indicates that excessive gaming partly parallels impulse control and substance use disorders regarding impulsivity measured on the self-reported, behavioral and electrophysiological level. Although the present study does not allow drawing firm conclusions on causality, it might be that trait impulsivity, poor error processing and diminished behavioral response inhibition underlie the excessive gaming patterns observed in certain individuals. They might be less sensitive to negative consequences of gaming and therefore continue their behavior despite adverse consequences. © 2012 The Authors, Addiction Biology © 2012 Society for the Study of Addiction.
Beste, Christian; Mückschel, Moritz; Elben, Saskia; J Hartmann, Christian; McIntyre, Cameron C; Saft, Carsten; Vesper, Jan; Schnitzler, Alfons; Wojtecki, Lars
2015-07-01
Deep brain stimulation of the dorsal pallidum (globus pallidus, GP) is increasingly considered as a surgical therapeutic option in Huntington's disease (HD), but there is need to identify outcome measures useful for clinical trials. Computational models consider the GP to be part of a basal ganglia network involved in cognitive processes related to the control of actions. We examined behavioural and event-related potential (ERP) correlates of action control (i.e., error monitoring) and evaluated the effects of deep brain stimulation (DBS). We did this using a standard flanker paradigm and evaluated error-related ERPs. Patients were recruited from a prospective pilot trial for pallidal DBS in HD (trial number NCT00902889). From the initial four patients with Huntington's chorea, two patients with chronic external dorsal pallidum stimulation were available for follow-up and able to perform the task. The results suggest that the external GP constitutes an important basal ganglia element not only for error processing and behavioural adaptation but for general response monitoring processes as well. Response monitoring functions were fully controllable by switching pallidal DBS stimulation on and off. When stimulation was switched off, no neurophysiological and behavioural signs of error and general performance monitoring, as reflected by the error-related negativity and post-error slowing in reaction times were evident. The modulation of response monitoring processes by GP-DBS reflects a side effect of efforts to alleviate motor symptoms in HD. From a clinical neurological perspective, the results suggest that DBS in the external GP segment can be regarded as a potentially beneficial treatment with respect to cognitive functions.
The next organizational challenge: finding and addressing diagnostic error.
Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H
2014-03-01
Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. Two new approaches are described in case studies: Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians, and Kaiser Permanente's electronic health record-based reports that detect process breakdowns in the follow-up of abnormal findings. By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.
ERIC Educational Resources Information Center
Boll, Christina; Leppin, Julian Sebastian; Schömann, Klaus
2016-01-01
Overeducation potentially signals a productivity loss. With Socio-Economic Panel data from 1984 to 2011 we identify drivers of educational mismatch for East and West medium and highly educated Germans. Addressing measurement error, state dependence and unobserved heterogeneity, we run dynamic mixed multinomial logit models for three different…
Multiple Intravenous Infusions Phase 1b
Cassano-Piché, A; Fan, M; Sabovitch, S; Masino, C; Easty, AC
2012-01-01
Background Minimal research has been conducted into the potential patient safety issues related to administering multiple intravenous (IV) infusions to a single patient. Previous research has highlighted that there are a number of related safety risks. In Phase 1a of this study, an analysis of 2 national incident-reporting databases (Institute for Safe Medical Practices Canada and United States Food and Drug Administration MAUDE) found that a high percentage of incidents associated with the administration of multiple IV infusions resulted in patient harm. Objectives The primary objectives of Phase 1b of this study were to identify safety issues with the potential to cause patient harm stemming from the administration of multiple IV infusions; and to identify how nurses are being educated on key principles required to safely administer multiple IV infusions. Data Sources and Review Methods A field study was conducted at 12 hospital clinical units (sites) across Ontario, and telephone interviews were conducted with program coordinators or instructors from both the Ontario baccalaureate nursing degree programs and the Ontario postgraduate Critical Care Nursing Certificate programs. Data were analyzed using Rasmussen’s 1997 Risk Management Framework and a Health Care Failure Modes and Effects Analysis. Results Twenty-two primary patient safety issues were identified with the potential to directly cause patient harm. Seventeen of these (critical issues) were categorized into 6 themes. A cause-consequence tree was established to outline all possible contributing factors for each critical issue. Clinical recommendations were identified for immediate distribution to, and implementation by, Ontario hospitals. Future investigation efforts were planned for Phase 2 of the study. Limitations This exploratory field study identifies the potential for errors, but does not describe the direct observation of such errors, except in a few cases where errors were observed. Not all issues are known in advance, and the frequency of errors is too low to be observed in the time allotted and with the limited sample of observations. Conclusions The administration of multiple IV infusions to a single patient is a complex task with many potential associated patient safety risks. Improvements to infusion and infusion-related technology, education standards, clinical best practice guidelines, hospital policies, and unit work practices are required to reduce the risk potential. This report makes several recommendations to Ontario hospitals so that they can develop an awareness of the issues highlighted in this report and minimize some of the risks. Further investigation of mitigating strategies is required and will be undertaken in Phase 2 of this research. Plain Language Summary Patients, particularly in critical care environments, often require multiple intravenous (IV) medications via large volumetric or syringe infusion pumps. The infusion of multiple IV medications is not without risk; unintended errors during these complex procedures have resulted in patient harm. However, the range of associated risks and the factors contributing to these risks are not well understood. Health Quality Ontario’s Ontario Health Technology Advisory Committee commissioned the Health Technology Safety Research Team at the University Health Network to conduct a multi-phase study to identify and mitigate the risks associated with multiple IV infusions. 
Some of the questions addressed by the team were as follows: What is needed to reduce the risk of errors for individuals who are receiving a lot of medications? What strategies work best? The initial report, Multiple Intravenous Infusions Phase 1a: Situation Scan Summary Report, summarizes the interim findings based on a literature review, an incident database review, and a technology scan. The Health Technology Safety Research Team worked in close collaboration with the Institute for Safe Medication Practices Canada on an exploratory study to understand the risks associated with multiple IV infusions and the degree to which nurses are educated to help mitigate them. The current report, Multiple Intravenous Infusions Phase 1b: Practice and Training Scan, presents the findings of a field study of 12 hospital clinical units across Ontario, as well as 13 interviews with educators from baccalaureate-level nursing degree programs and postgraduate Critical Care Nursing Certificate programs. It makes 9 recommendations that emphasize best practices for the administration of multiple IV infusions and pertain to secondary infusions, line identification, line set-up and removal, and administering IV bolus medications. The Health Technology Safety Research Team has also produced an associated report for hospitals entitled Mitigating the Risks Associated With Multiple IV Infusions: Recommendations Based on a Field Study of Twelve Ontario Hospitals, which highlights the 9 interim recommendations and provides a brief rationale for each one. PMID:23074426
Safe Practices for Copy and Paste in the EHR
Lehmann, Christoph U.; Michel, Jeremy; Solomon, Ronni; Possanza, Lorraine; Gandhi, Tejal
2017-01-01
Summary Background Copy and paste functionality can support efficiency during clinical documentation, but may promote inaccurate documentation with risks for patient safety. The Partnership for Health IT Patient Safety was formed to gather data, conduct analysis, educate, and disseminate safe practices for safer care using health information technology (IT). Objective To characterize copy and paste events in clinical care, identify safety risks, describe existing evidence, and develop implementable practice recommendations for safe reuse of information via copy and paste. Methods The Partnership 1) reviewed 12 reported safety events, 2) solicited expert input, and 3) performed a systematic literature review (2010 to January 2015) to identify publications addressing frequency, perceptions/attitudes, patient safety risks, existing guidance, and potential interventions and mitigation practices. Results The literature review identified 51 publications that were included. Overall, 66% to 90% of clinicians routinely use copy and paste. One study of diagnostic errors found that copy and paste led to 2.6% of errors in which a missed diagnosis required patients to seek additional unplanned care. Copy and paste can promote note bloat, internal inconsistencies, error propagation, and documentation in the wrong patient chart. Existing guidance identified specific responsibilities for authors, organizations, and electronic health record (EHR) developers. Analysis of 12 reported copy and paste safety events was congruent with problems identified from the literature review. Conclusion Despite regular copy and paste use, evidence regarding direct risk to patient safety remains sparse, with significant study limitations. Drawing on existing evidence, the Partnership developed four safe practice recommendations: 1) Provide a mechanism to make copy and paste material easily identifiable; 2) Ensure the provenance of copy and paste material is readily available; 3) Ensure adequate staff training and education; 4) Ensure copy and paste practices are regularly monitored, measured, and assessed. PMID:28074211
Tsou, Amy Y; Lehmann, Christoph U; Michel, Jeremy; Solomon, Ronni; Possanza, Lorraine; Gandhi, Tejal
2017-01-11
Copy and paste functionality can support efficiency during clinical documentation, but may promote inaccurate documentation with risks for patient safety. The Partnership for Health IT Patient Safety was formed to gather data, conduct analysis, educate, and disseminate safe practices for safer care using health information technology (IT). To characterize copy and paste events in clinical care, identify safety risks, describe existing evidence, and develop implementable practice recommendations for safe reuse of information via copy and paste. The Partnership 1) reviewed 12 reported safety events, 2) solicited expert input, and 3) performed a systematic literature review (2010 to January 2015) to identify publications addressing frequency, perceptions/attitudes, patient safety risks, existing guidance, and potential interventions and mitigation practices. The literature review identified 51 publications that were included. Overall, 66% to 90% of clinicians routinely use copy and paste. One study of diagnostic errors found that copy and paste led to 2.6% of errors in which a missed diagnosis required patients to seek additional unplanned care. Copy and paste can promote note bloat, internal inconsistencies, error propagation, and documentation in the wrong patient chart. Existing guidance identified specific responsibilities for authors, organizations, and electronic health record (EHR) developers. Analysis of 12 reported copy and paste safety events was congruent with problems identified from the literature review. Despite regular copy and paste use, evidence regarding direct risk to patient safety remains sparse, with significant study limitations. Drawing on existing evidence, the Partnership developed four safe practice recommendations: 1) Provide a mechanism to make copy and paste material easily identifiable; 2) Ensure the provenance of copy and paste material is readily available; 3) Ensure adequate staff training and education; 4) Ensure copy and paste practices are regularly monitored, measured, and assessed.
Electron Beam Propagation Through a Magnetic Wiggler with Random Field Errors
1989-08-21
Another quantity of interest is the vector potential δA_w(z) associated with the field error δB_w(z). Defining the normalized vector potential δa = e δA_w/… , it then follows that the correlation of the normalized vector-potential errors, ⟨δa_x(z_1) δa_x(z_2)⟩, is given by a double integral over z′ and z″ of the field-error correlation ⟨δB_x(z′) δB_x(z″)⟩… Throughout the following, terms of order O(…/z) will be neglected. Similarly, for the y-component of the normalized vector potential errors, …
Detection and correction of prescription errors by an emergency department pharmacy service.
Stasiak, Philip; Afilalo, Marc; Castelino, Tanya; Xue, Xiaoqing; Colacone, Antoinette; Soucy, Nathalie; Dankoff, Jerrald
2014-05-01
Emergency departments (EDs) are recognized as a high-risk setting for prescription errors. Pharmacist involvement may be important in reviewing prescriptions to identify and correct errors. The objectives of this study were to describe the frequency and type of prescription errors detected by pharmacists in EDs, determine the proportion of errors that could be corrected, and identify factors associated with prescription errors. This prospective observational study was conducted in a tertiary care teaching ED on 25 consecutive weekdays. Pharmacists reviewed all documented prescriptions and flagged and corrected errors for patients in the ED. We collected information on patient demographics, details on prescription errors, and the pharmacists' recommendations. A total of 3,136 ED prescriptions were reviewed. The proportion of prescriptions in which a pharmacist identified an error was 3.2% (99 of 3,136; 95% confidence interval [CI] 2.5-3.8). The types of identified errors were wrong dose (28 of 99, 28.3%), incomplete prescription (27 of 99, 27.3%), wrong frequency (15 of 99, 15.2%), wrong drug (11 of 99, 11.1%), wrong route (1 of 99, 1.0%), and other (17 of 99, 17.2%). The pharmacy service intervened and corrected 78 (78 of 99, 78.8%) errors. Factors associated with prescription errors were patient age over 65 (odds ratio [OR] 2.34; 95% CI 1.32-4.13), prescriptions with more than one medication (OR 5.03; 95% CI 2.54-9.96), and those written by emergency medicine residents compared to attending emergency physicians (OR 2.21, 95% CI 1.18-4.14). Pharmacists in a tertiary ED are able to correct the majority of prescriptions in which they find errors. Errors are more likely to be identified in prescriptions written for older patients, those containing multiple medication orders, and those prescribed by emergency residents.
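As a worked illustration of the association measures reported above, the snippet below computes an unadjusted odds ratio with a Wald confidence interval from a 2x2 table. The counts are invented (the study's reported ORs came from its own data, presumably with adjustment), so the numbers here are illustrative only.

```python
import numpy as np

# Worked illustration of an (unadjusted) odds-ratio calculation of the kind
# reported above. The 2x2 counts are invented, not the study's data.
#                     error   no error
counts = np.array([[   40,      760],    # age > 65
                   [   59,     2277]])   # age <= 65

odds_ratio = (counts[0, 0] * counts[1, 1]) / (counts[0, 1] * counts[1, 0])
se_log_or = np.sqrt((1.0 / counts).sum())          # Wald standard error of log(OR)
ci_lo, ci_hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")
```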
Response Error in Reporting Dental Coverage by Older Americans in the Health and Retirement Study
Manski, Richard J.; Mathiowetz, Nancy A.; Campbell, Nancy; Pepper, John V.
2014-01-01
The aim of this research was to analyze the inconsistency in responses to survey questions within the Health and Retirement Study (HRS) regarding insurance coverage of dental services. Self-reports of dental coverage in the dental services section were compared with those in the insurance section of the 2002 HRS to identify inconsistent responses. Logistic regression identified characteristics of persons reporting discrepancies and assessed the effect of measurement error on dental coverage coefficient estimates in dental utilization models. In 18% of cases, data reported in the insurance section contradicted data reported in the dental use section of the HRS by those who said insurance at least partially covered (or would have covered) their (hypothetical) dental use. Additional findings included distinct characteristics of persons with potential reporting errors and a downward bias to the regression coefficient for coverage in a dental use model without controls for inconsistent self-reports of coverage. This study offers evidence for the need to validate self-reports of dental insurance coverage among a survey population of older Americans to obtain more accurate estimates of coverage and its impact on dental utilization. PMID:25428430
Stanton, Neville A; Harvey, Catherine
2017-02-01
Risk assessments in Sociotechnical Systems (STS) tend to be based on error taxonomies, yet the term 'human error' does not sit easily with STS theories and concepts. A new break-link approach was proposed as an alternative risk assessment paradigm to reveal the effect of information communication failures between agents and tasks on the entire STS. A case study of the training of a Royal Navy crew detecting a low flying Hawk (simulating a sea-skimming missile) is presented using EAST to model the Hawk-Frigate STS in terms of social, information and task networks. By breaking 19 social links and 12 task links, 137 potential risks were identified. Discoveries included revealing the effect of risk moving around the system; reducing the risks to the Hawk increased the risks to the Frigate. Future research should examine the effects of compounded information communication failures on STS performance. Practitioner Summary: The paper presents a step-by-step walk-through of EAST to show how it can be used for risk assessment in sociotechnical systems. The 'broken-links' method takes a systemic, rather than taxonomic, approach to identify information communication failures in social and task networks.
Bohil, Corey J; Higgins, Nicholas A; Keebler, Joseph R
2014-01-01
We compared methods for predicting and understanding the source of confusion errors during military vehicle identification training. Participants completed training to identify main battle tanks. They also completed card-sorting and similarity-rating tasks to express their mental representation of resemblance across the set of training items. We expected participants to selectively attend to a subset of vehicle features during these tasks, and we hypothesised that we could predict identification confusion errors based on the outcomes of the card-sort and similarity-rating tasks. Based on card-sorting results, we were able to predict about 45% of observed identification confusions. Based on multidimensional scaling of the similarity-rating data, we could predict more than 80% of identification confusions. These methods also enabled us to infer the dimensions receiving significant attention from each participant. This understanding of mental representation may be crucial in creating personalised training that directs attention to features that are critical for accurate identification. Participants completed military vehicle identification training and testing, along with card-sorting and similarity-rating tasks. The data enabled us to predict up to 84% of identification confusion errors and to understand the mental representation underlying these errors. These methods have potential to improve training and reduce identification errors leading to fratricide.
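A compact sketch of the similarity-rating-to-confusion pipeline described above: similarity ratings are converted to dissimilarities, embedded with multidimensional scaling, and each item's nearest neighbour in the embedding is taken as its most likely confusion. The rating matrix and vehicle list below are fabricated for illustration; scikit-learn's MDS is assumed to be available.

```python
import numpy as np
from sklearn.manifold import MDS

# Predicting identification confusions from similarity ratings via MDS
# (fabricated ratings; in the study, ratings came from participants).
rng = np.random.default_rng(5)
vehicles = ["T-72", "T-80", "M1A2", "Leopard 2", "Challenger 2", "Leclerc"]
n = len(vehicles)

# Symmetric similarity ratings on a 1-9 scale (higher = more similar).
sim = rng.integers(1, 10, size=(n, n)).astype(float)
sim = (sim + sim.T) / 2.0
np.fill_diagonal(sim, 9.0)

dissim = 9.0 - sim                           # convert similarity to dissimilarity
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)

# Predicted confusion for each vehicle: its nearest neighbour in MDS space.
dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
np.fill_diagonal(dists, np.inf)
for i, name in enumerate(vehicles):
    print(f"{name:>12} most likely confused with {vehicles[int(np.argmin(dists[i]))]}")
```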
Utilizing measure-based feedback in control-mastery theory: A clinical error.
Snyder, John; Aafjes-van Doorn, Katie
2016-09-01
Clinical errors and ruptures are an inevitable part of clinical practice. Oftentimes, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate)--a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society
Utilizing LANDSAT imagery to monitor land-use change - A case study in Ohio
NASA Technical Reports Server (NTRS)
Gordon, S. I.
1980-01-01
A study, performed in Ohio, of the nature and extent of interpretation errors in the application of Landsat imagery to land-use planning and modeling is reported. Potential errors associated with the misalignment of pixels after geometric correction and with misclassification of land cover or land use due to spectral similarities were identified on interpreted computer-compatible tapes of a portion of Franklin County for two adjacent days of 1975 and one day of 1973, and the extents of these errors were quantified by comparison with a ground-checked set of aerial-photograph interpretations. The open-space and agricultural categories are found to be the most consistently classified, while the more urban areas were classified correctly only about 8 to 43% of the time. It is thus recommended that the direct application of Landsat data to land-use planning must await improvements in classification techniques and accuracy.
Using video recording to identify management errors in pediatric trauma resuscitation.
Oakley, Ed; Stocker, Sergio; Staubli, Georg; Young, Simon
2006-03-01
To determine the ability of video recording to identify management errors in trauma resuscitation and to compare this method with medical record review. The resuscitation of children who presented to the emergency department of the Royal Children's Hospital between February 19, 2001, and August 18, 2002, for whom the trauma team was activated was video recorded. The tapes were analyzed, and management was compared with Advanced Trauma Life Support guidelines. Deviations from these guidelines were recorded as errors. Fifty video recordings were analyzed independently by 2 reviewers. Medical record review was undertaken for a cohort of the most seriously injured patients, and errors were identified. The errors detected with the 2 methods were compared. Ninety resuscitations were video recorded and analyzed. An average of 5.9 errors per resuscitation was identified with this method (range: 1-12 errors). Twenty-five children (28%) had an injury severity score of >11; there was an average of 2.16 errors per patient in this group. Only 10 (20%) of these errors were detected in the medical record review. Medical record review detected an additional 8 errors that were not evident on the video recordings. Concordance between independent reviewers was high, with 93% agreement. Video recording is more effective than medical record review in detecting management errors in pediatric trauma resuscitation. Management errors in pediatric trauma resuscitation are common and often involve basic resuscitation principles. Resuscitation of the most seriously injured children was associated with fewer errors. Video recording is a useful adjunct to trauma resuscitation auditing.
The Lung Image Database Consortium (LIDC): ensuring the integrity of expert-defined "truth".
Armato, Samuel G; Roberts, Rachael Y; McNitt-Gray, Michael F; Meyer, Charles R; Reeves, Anthony P; McLennan, Geoffrey; Engelmann, Roger M; Bland, Peyton H; Aberle, Denise R; Kazerooni, Ella A; MacMahon, Heber; van Beek, Edwin J R; Yankelevitz, David; Croft, Barbara Y; Clarke, Laurence P
2007-12-01
Computer-aided diagnostic (CAD) systems fundamentally require the opinions of expert human observers to establish "truth" for algorithm development, training, and testing. The integrity of this "truth," however, must be established before investigators commit to this "gold standard" as the basis for their research. The purpose of this study was to develop a quality assurance (QA) model as an integral component of the "truth" collection process concerning the location and spatial extent of lung nodules observed on computed tomography (CT) scans to be included in the Lung Image Database Consortium (LIDC) public database. One hundred CT scans were interpreted by four radiologists through a two-phase process. For the first of these reads (the "blinded read phase"), radiologists independently identified and annotated lesions, assigning each to one of three categories: "nodule ≥3 mm," "nodule <3 mm," or "non-nodule ≥3 mm." For the second read (the "unblinded read phase"), the same radiologists independently evaluated the same CT scans, but with all of the annotations from the previously performed blinded reads presented; each radiologist could add to, edit, or delete their own marks; change the lesion category of their own marks; or leave their marks unchanged. The post-unblinded read set of marks was grouped into discrete nodules and subjected to the QA process, which consisted of identification of potential errors introduced during the complete image annotation process and correction of those errors. Seven categories of potential error were defined; any nodule with a mark that satisfied the criterion for one of these categories was referred to the radiologist who assigned that mark for either correction or confirmation that the mark was intentional. A total of 105 QA issues were identified across 45 (45.0%) of the 100 CT scans. Radiologist review resulted in modifications to 101 (96.2%) of these potential errors. Twenty-one lesions erroneously marked as lung nodules after the unblinded reads had this designation removed through the QA process. The establishment of "truth" must incorporate a QA process to guarantee the integrity of the datasets that will provide the basis for the development, training, and testing of CAD systems.
Measurement error in environmental epidemiology and the shape of exposure-response curves.
Rhomberg, Lorenz R; Chandalia, Juhi K; Long, Christopher M; Goodman, Julie E
2011-09-01
Both classical and Berkson exposure measurement errors as encountered in environmental epidemiology data can result in biases in fitted exposure-response relationships that are large enough to affect the interpretation and use of the apparent exposure-response shapes in risk assessment applications. A variety of sources of potential measurement error exist in the process of estimating individual exposures to environmental contaminants, and the authors review the evaluation in the literature of the magnitudes and patterns of exposure measurement errors that prevail in actual practice. It is well known among statisticians that random errors in the values of independent variables (such as exposure in exposure-response curves) may tend to bias regression results. For increasing curves, this effect tends to flatten and apparently linearize what is in truth a steeper and perhaps more curvilinear or even threshold-bearing relationship. The degree of bias is tied to the magnitude of the measurement error in the independent variables. It has been shown that the degree of bias known to apply to actual studies is sufficient to produce a false linear result, and that although nonparametric smoothing and other error-mitigating techniques may assist in identifying a threshold, they do not guarantee detection of a threshold. The consequences of this could be great, as it could lead to a misallocation of resources towards regulations that do not offer any benefit to public health.
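To make the attenuation mechanism described above concrete, the following minimal simulation sketch (not from the study; the exposure range, threshold location, and error magnitudes are arbitrary assumptions) shows how classical error added to the exposure flattens the fitted slope of a threshold-shaped exposure-response relationship:

```python
# Toy illustration of classical-error attenuation; all values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
true_exposure = rng.uniform(0, 10, n)
# Assumed true threshold response: no effect below 5, linear above it.
response = np.clip(true_exposure - 5, 0, None) + rng.normal(0, 0.5, n)

for sd in (0.0, 1.0, 3.0):                      # classical error magnitudes
    measured = true_exposure + rng.normal(0, sd, n)
    slope, _ = np.polyfit(measured, response, 1)
    print(f"measurement error sd={sd}: fitted linear slope = {slope:.2f}")
# As the error grows, the fitted slope shrinks and the threshold is smoothed
# away, making the relationship appear shallower and more nearly linear.
```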
NASA Astrophysics Data System (ADS)
Saad, Katherine M.; Wunch, Debra; Deutscher, Nicholas M.; Griffith, David W. T.; Hase, Frank; De Mazière, Martine; Notholt, Justus; Pollard, David F.; Roehl, Coleen M.; Schneider, Matthias; Sussmann, Ralf; Warneke, Thorsten; Wennberg, Paul O.
2016-11-01
Global and regional methane budgets are markedly uncertain. Conventionally, estimates of methane sources are derived by bridging emissions inventories with atmospheric observations employing chemical transport models. The accuracy of this approach requires correctly simulating advection and chemical loss such that modeled methane concentrations scale with surface fluxes. When total column measurements are assimilated into this framework, modeled stratospheric methane introduces additional potential for error. To evaluate the impact of such errors, we compare Total Carbon Column Observing Network (TCCON) and GEOS-Chem total and tropospheric column-averaged dry-air mole fractions of methane. We find that the model's stratospheric contribution to the total column is insensitive to perturbations to the seasonality or distribution of tropospheric emissions or loss. In the Northern Hemisphere, we identify disagreement between the measured and modeled stratospheric contribution, which increases as the tropopause altitude decreases, and a temporal phase lag in the model's tropospheric seasonality driven by transport errors. Within the context of GEOS-Chem, we find that the errors in tropospheric advection partially compensate for the stratospheric methane errors, masking inconsistencies between the modeled and measured tropospheric methane. These seasonally varying errors alias into source attributions resulting from model inversions. In particular, we suggest that the tropospheric phase lag error leads to large misdiagnoses of wetland emissions in the high latitudes of the Northern Hemisphere.
Analyzing communication errors in an air medical transport service.
Dalto, Joseph D; Weir, Charlene; Thomas, Frank
2013-01-01
Poor communication can result in adverse events. Presently, no standards exist for classifying and analyzing air medical communication errors. This study sought to determine the frequency and types of communication errors reported within an air medical quality and safety assurance reporting system. Of 825 quality assurance reports submitted in 2009, 278 were randomly selected and analyzed for communication errors. Each communication error was classified and mapped to Clark's communication level hierarchy (i.e., levels 1-4). Descriptive statistics were performed, and comparisons were evaluated using chi-square analysis. Sixty-four communication errors were identified in 58 reports (21% of 278). Of the 64 identified communication errors, only 18 (28%) were classified by the staff to be communication errors. Communication errors occurred most often at level 1 (n = 42/64, 66%) followed by level 4 (21/64, 33%). Level 2 and 3 communication failures were rare (<1%). Communication errors were found in a fifth of quality and safety assurance reports. The reporting staff identified less than a third of these errors. Nearly all communication errors (99%) occurred at either the lowest level of communication (level 1, 66%) or the highest level (level 4, 33%). An air medical communication ontology is necessary to improve the recognition and analysis of communication errors. Copyright © 2013 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.
Roche, Erin A.; Dovichin, Colin M.; Arnold, Todd W.
2014-01-01
Implicit assumptions for most mark-recapture studies are that individuals do not lose their markers and all observed markers are correctly recorded. If these assumptions are violated, e.g., due to loss or extreme wear of markers, estimates of population size and vital rates will be biased. Double-marking experiments have been widely used to estimate rates of marker loss and adjust for associated bias, and we extended this approach to estimate rates of recording errors. We double-marked 309 Piping Plovers (Charadrius melodus) with unique combinations of color bands and alphanumeric flags and used multi-state mark recapture models to estimate the frequency with which plovers were misidentified. Observers were twice as likely to read and report an invalid color-band combination (2.4% of the time) as an invalid alphanumeric code (1.0%). Observers failed to read matching band combinations or alphanumeric flag codes 4.5% of the time. Unlike previous band resighting studies, use of two resightable markers allowed us to identify when resighting errors resulted in reports of combinations or codes that were valid, but still incorrect; our results suggest this may be a largely unappreciated problem in mark-resight studies. Field-readable alphanumeric flags offer a promising auxiliary marker for identifying and potentially adjusting for false-positive resighting errors that may otherwise bias demographic estimates.
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model grows with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits
Maurer, J. Michael; Steele, Vaughn R.; Cope, Lora M.; Vincent, Gina M.; Stephen, Julia M.; Calhoun, Vince D.; Kiehl, Kent A.
2016-01-01
Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and comparable neurocognitive deficits as their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n = 100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170
Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.
2009-01-01
Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
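The overestimation described above can be illustrated with a toy closed-population simulation (not the authors' model; the population size, capture probability, and misidentification rates are arbitrary assumptions) in which a misidentified recapture is recorded as a new "ghost" individual:

```python
# Toy simulation: Lincoln-Petersen abundance estimates when some recaptures
# are misidentified as new individuals; all parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
N, p = 500, 0.3                       # true population size, capture probability

def lincoln_petersen(misid_rate, n_sims=2000):
    estimates = []
    for _ in range(n_sims):
        first = rng.random(N) < p                   # caught on occasion 1
        second = rng.random(N) < p                  # caught on occasion 2
        recaptured = first & second
        # A misidentified recapture is recorded as a "ghost" new animal:
        misread = recaptured & (rng.random(N) < misid_rate)
        n1 = first.sum()
        n2 = second.sum()                           # each capture counted once
        m2 = (recaptured & ~misread).sum()          # recognized recaptures
        if m2 > 0:
            estimates.append(n1 * n2 / m2)
    return np.mean(estimates)

for rate in (0.0, 0.05, 0.10):
    print(f"misidentification rate {rate:.0%}: mean N-hat = {lincoln_petersen(rate):.0f}")
# Ignoring misidentification drives N-hat upward, consistent with the
# overestimation by conventional estimators described in the abstract.
```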
Novak, Avrey; Nyflot, Matthew J; Ermoian, Ralph P; Jordan, Loucille E; Sponseller, Patricia A; Kane, Gabrielle M; Ford, Eric C; Zeng, Jing
2016-05-01
Radiation treatment planning involves a complex workflow that has multiple potential points of vulnerability. This study utilizes an incident reporting system to identify the origination and detection points of near-miss errors, in order to guide departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected, nor have they applied a near-miss risk index (NMRI) to gauge severity. From 3/2012 to 3/2014, 1897 incidents were analyzed from a departmental incident learning system. All incidents were prospectively reviewed weekly by a multidisciplinary team and assigned a NMRI score ranging from 0 to 4 reflecting potential harm to the patient (no potential harm to potential critical harm). Incidents were classified by point of incident origination and detection based on a 103-step workflow. The individual steps were divided among nine broad workflow categories (patient assessment, imaging for radiation therapy (RT) planning, treatment planning, pretreatment plan review, treatment delivery, on-treatment quality management, post-treatment completion, equipment/software quality management, and other). The average NMRI scores of incidents originating or detected within each broad workflow area were calculated. Additionally, out of 103 individual process steps, 35 were classified as safety barriers, the process steps whose primary function is to catch errors. The safety barriers which most frequently detected incidents were identified and analyzed. Finally, the distance between event origination and detection was explored by grouping events by the number of broad workflow area events passed through before detection, and average NMRI scores were compared. Near-miss incidents most commonly originated within treatment planning (33%). However, the incidents with the highest average NMRI scores originated during imaging for RT planning (NMRI = 2.0, average NMRI of all events = 1.5), specifically during the documentation of patient positioning and localization of the patient. Incidents were most frequently detected during treatment delivery (30%), and incidents identified at this point also had higher severity scores than other workflow areas (NMRI = 1.6). Incidents identified during on-treatment quality management were also more severe (NMRI = 1.7), and the specific process steps of reviewing portal and CBCT images tended to catch the highest-severity incidents. On average, safety barriers caught 46% of all incidents, most frequently at physics chart review, therapist's chart check, and the review of portal images; however, most of the incidents that pass through a particular safety barrier are not of a type that the barrier is designed to capture. Incident learning systems can be used to assess the most common points of error origination and detection in radiation oncology. This can help tailor safety improvement efforts and target the highest impact portions of the workflow. The most severe near-miss events tend to originate during simulation, with the most severe near-miss events detected at the time of patient treatment. Safety barriers can be improved to allow earlier detection of near-miss events.
Economic impact of medication error: a systematic review.
Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P
2017-05-01
Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.
Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality
Gaeuman, David; Jacobson, Robert B.
2005-01-01
When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by mis-alignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass mis-alignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
Optimizing the performance and structure of the D0 Collie confidence limit evaluator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fishchler, Mark; /Fermilab
2010-07-01
D0 Collie is a program used to perform limit calculations based on ensembles of pseudo-experiments ('PEs'). Since the application of this program to the crucial Higgs mass limit is quite CPU intensive, it has been deemed important to carefully review this program, with an eye toward identifying and implementing potential performance improvements. At the same time, we identify any coding errors or opportunities for potential structural (or algorithm) improvement discovered in the course of gaining sufficient understanding of the workings of Collie to sensibly explore for optimizations. Based on a careful analysis of the program, a series of code changes with potential for improving performance has been identified. The implementation and evaluation of the most important parts of this series has been done, with gratifying speedup results. The bottom line: We have identified and implemented changes leading to a factor of 2.19 speedup in the example program provided, and expected to translate to a factor of roughly 4 speedup in typical realistic usage.
Artefacts found in computed radiography.
Cesar, L J; Schueler, B A; Zink, F E; Daly, T R; Taubel, J P; Jorgenson, L L
2001-02-01
Artefacts on radiographic images are distracting and may compromise accurate diagnosis. Although most artefacts that occur in conventional radiography have become familiar, computed radiography (CR) systems produce artefacts that differ from those found in conventional radiography. We have encountered a variety of artefacts in CR images that were produced from four different models of plate reader. These artefacts have been identified and traced to the imaging plate, plate reader, image processing software or laser printer or to operator error. Understanding the potential sources of CR artefacts will aid in identifying and resolving problems quickly and help prevent future occurrences.
Yang, F; Cao, N; Young, L; Howard, J; Logan, W; Arbuckle, T; Sponseller, P; Korssjoen, T; Meyer, J; Ford, E
2015-06-01
Though failure mode and effects analysis (FMEA) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge, its output has never been validated against data on errors that actually occur. The objective of this study was to perform FMEA of a stereotactic body radiation therapy (SBRT) treatment planning process and validate the results against data recorded within an incident learning system. FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, dosimetrists, and IT technologists. Potential failure modes were identified through a systematic review of the process map. Failure modes were rated for severity, occurrence, and detectability on a scale of one to ten and a risk priority number (RPN) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that has been active for two and a half years. Differences between FMEA anticipated failure modes and existing incidents were identified. FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. Combining both methods yielded a total of 76 possible process failures, of which 13 (17%) were missed by FMEA while 43 (57%) were identified by FMEA only. When scored for RPN, the 13 events missed by FMEA ranked within the lower half of all failure modes and exhibited significantly lower severity relative to those identified by FMEA (p = 0.02). FMEA, though valuable, is subject to certain limitations. In this study, FMEA failed to identify 17% of actual failure modes, though these were of lower risk. Similarly, an incident learning system alone fails to identify a large number of potentially high-severity process errors. Using FMEA in combination with incident learning may render an improved overview of risks within a process.
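For readers unfamiliar with the scoring, the risk priority number is simply the product of the three ratings; a minimal bookkeeping sketch follows, with failure modes and scores that are hypothetical rather than taken from the study:

```python
# Minimal FMEA scoring sketch; failure modes and ratings are hypothetical.
failure_modes = [
    # (description, severity, occurrence, detectability), each rated 1-10
    ("Wrong CT dataset selected for planning", 8, 2, 4),
    ("Incorrect prescription dose entered",     9, 2, 3),
    ("Outdated contour set used",               7, 3, 5),
]

scored = [(desc, s * o * d) for desc, s, o, d in failure_modes]   # RPN = S * O * D
for desc, rpn in sorted(scored, key=lambda x: x[1], reverse=True):
    print(f"RPN {rpn:4d}  {desc}")
```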
Boubchir, Larbi; Touati, Youcef; Daachi, Boubaker; Chérif, Arab Ali
2015-08-01
In thought-based steering of robots, error potentials (ErrP) can appear when the action resulting from the brain-machine interface (BMI) classifier/controller does not correspond to the user's thought. Using the Steady State Visual Evoked Potentials (SSVEP) techniques, ErrP, which appear when a classification error occurs, are not easily recognizable by only examining the temporal or frequency characteristics of EEG signals. A supplementary classification process is therefore needed to identify them in order to stop the course of the action and back up to a recovery state. This paper presents a set of time-frequency (t-f) features for the detection and classification of EEG ErrP in extra-brain activities due to misclassification observed by a user exploiting non-invasive BMI and robot control in the task space. The proposed features are able to characterize and detect ErrP activities in the t-f domain. These features are derived from the information embedded in the t-f representation of EEG signals, and include the Instantaneous Frequency (IF), t-f information complexity, SVD information, energy concentration and sub-bands' energies. The experiment results on real EEG data show that the use of the proposed t-f features for detecting and classifying EEG ErrP achieved an overall classification accuracy up to 97% for 50 EEG segments using 2-class SVM classifier.
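A minimal sketch of the general approach, extracting a few simple time-frequency features from EEG segments and feeding them to a 2-class SVM, is shown below; the sampling rate, window settings, feature set, and labels are illustrative assumptions and do not reproduce the authors' exact features:

```python
# Sketch of spectrogram-based features plus a 2-class SVM for EEG segments;
# sampling rate, windowing, and labels are hypothetical stand-ins.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

fs = 256                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
segments = rng.standard_normal((50, fs))    # 50 one-second EEG segments (toy data)
labels = rng.integers(0, 2, 50)             # 0 = correct trial, 1 = ErrP

def tf_features(x):
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=64, noverlap=32)
    p = Sxx / Sxx.sum()                                 # normalized t-f energy
    entropy = -(p * np.log(p + 1e-12)).sum()            # t-f complexity proxy
    low_band = Sxx[(f >= 1) & (f <= 10)].sum() / Sxx.sum()  # low-band energy share
    peak_freq = f[Sxx.sum(axis=1).argmax()]             # dominant frequency
    return [entropy, low_band, peak_freq]

X = np.array([tf_features(s) for s in segments])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```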
A cerebellar thalamic cortical circuit for error-related cognitive control.
Ide, Jaime S; Li, Chiang-shan R
2011-01-01
Error detection and behavioral adjustment are core components of cognitive control. Numerous studies have focused on the anterior cingulate cortex (ACC) as a critical locus of this executive function. Our previous work showed greater activation in the dorsal ACC and subcortical structures during error detection, and activation in the ventrolateral prefrontal cortex (VLPFC) during post-error slowing (PES) in a stop signal task (SST). However, the extent of error-related cortical or subcortical activation across subjects was not correlated with VLPFC activity during PES. So then, what causes VLPFC activation during PES? To address this question, we employed Granger causality mapping (GCM) and identified regions that Granger caused VLPFC activation in 54 adults performing the SST during fMRI. These brain regions, including the supplementary motor area (SMA), cerebellum, a pontine region, and medial thalamus, represent potential targets responding to errors in a way that could influence VLPFC activation. In confirmation of this hypothesis, the error-related activity of these regions correlated with VLPFC activation during PES, with the cerebellum showing the strongest association. The finding that cerebellar activation Granger causes prefrontal activity during behavioral adjustment supports a cerebellar function in cognitive control. Furthermore, multivariate GCA described the "flow of information" across these brain regions. Through connectivity with the thalamus and SMA, the cerebellum mediates error and post-error processing in accord with known anatomical projections. Taken together, these new findings highlight the role of the cerebello-thalamo-cortical pathway in an executive function that has heretofore largely been ascribed to the anterior cingulate-prefrontal cortical circuit. Copyright © 2010 Elsevier Inc. All rights reserved.
Schiffino, Felipe L; Zhou, Vivian; Holland, Peter C
2014-02-01
Within most contemporary learning theories, reinforcement prediction error, the difference between the obtained and expected reinforcer value, critically influences associative learning. In some theories, this prediction error determines the momentary effectiveness of the reinforcer itself, such that the same physical event produces more learning when its presentation is surprising than when it is expected. In other theories, prediction error enhances attention to potential cues for that reinforcer by adjusting cue-specific associability parameters, biasing the processing of those stimuli so that they more readily enter into new associations in the future. A unique feature of these latter theories is that such alterations in stimulus associability must be represented in memory in an enduring fashion. Indeed, considerable data indicate that altered associability may be expressed days after its induction. Previous research from our laboratory identified brain circuit elements critical to the enhancement of stimulus associability by the omission of an expected event, and to the subsequent expression of that altered associability in more rapid learning. Here, for the first time, we identified a brain region, the posterior parietal cortex, as a potential site for a memorial representation of altered stimulus associability. In three experiments using rats and a serial prediction task, we found that intact posterior parietal cortex function was essential during the encoding, consolidation, and retrieval of an associability memory enhanced by surprising omissions. We discuss these new results in the context of our previous findings and additional plausible frontoparietal and subcortical networks. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
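The associability account described above is often formalized with a Pearce-Hall-style update, in which unsigned prediction error sets a cue's future learning rate; the following minimal sketch uses arbitrary parameter values and is not tied to the experiments reported here:

```python
# Pearce-Hall-style sketch (hypothetical parameters): a surprising omission
# boosts cue associability, which then scales learning on later trials.
gamma, S = 0.5, 0.3          # associability update weight, reinforcer salience
V, alpha = 0.0, 0.5          # associative strength, cue associability

def trial(V, alpha, reinforced):
    lam = 1.0 if reinforced else 0.0                          # reinforcer magnitude
    V_new = V + S * alpha * (lam - V)                         # learning scaled by associability
    alpha_new = gamma * abs(lam - V) + (1 - gamma) * alpha    # surprise updates associability
    return V_new, alpha_new

for _ in range(10):                            # acquisition trials
    V, alpha = trial(V, alpha, reinforced=True)
print(f"after training:  V={V:.2f}, alpha={alpha:.2f}")

V, alpha = trial(V, alpha, reinforced=False)   # surprising omission
print(f"after omission:  V={V:.2f}, alpha={alpha:.2f}  (associability rises)")
```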
Law, Katherine E; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; DiMarco, Shannon M; Linsmeier, Elyse; Wiegmann, Douglas A; Pugh, Carla M
The study aim was to determine whether residents' error management strategies changed across 2 simulated laparoscopic ventral hernia (LVH) repair procedures after receiving feedback on their initial performance. We hypothesized that error detection and recovery strategies would improve during the second procedure without hands-on practice. Retrospective review of participant procedural performances of simulated laparoscopic ventral herniorrhaphy. A total of 3 investigators reviewed procedure videos to identify surgical errors. Errors were deconstructed. Error management events were noted, including error identification and recovery. Residents performed the simulated LVH procedures during a course on advanced laparoscopy. Participants had 30 minutes to complete an LVH procedure. After verbal and simulator feedback, residents returned 24 hours later to perform a different, more difficult simulated LVH repair. Senior (N = 7; postgraduate year 4-5) residents in attendance at the course participated in this study. In the first LVH procedure, residents committed 121 errors (M = 17.14, standard deviation = 4.38). Although the number of errors increased to 146 (M = 20.86, standard deviation = 6.15) during the second procedure, residents progressed further in the second procedure. There was no significant difference in the number of errors committed for both procedures, but errors shifted to the late stage of the second procedure. Residents changed the error types that they attempted to recover (χ²(5) = 24.96, p < 0.001). For the second procedure, recovery attempts increased for action and procedure errors, but decreased for strategy errors. Residents also recovered the most errors in the late stage of the second procedure (p < 0.001). Residents' error management strategies changed between procedures following verbal feedback on their initial performance and feedback from the simulator. Errors and recovery attempts shifted to later steps during the second procedure. This may reflect residents' error management success in the earlier stages, which allowed further progression in the second simulation. Incorporating error recognition and management opportunities into surgical training could help track residents' learning curve and provide detailed, structured feedback on technical and decision-making skills. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
van de Plas, Afke; Slikkerveer, Mariëlle; Hoen, Saskia; Schrijnemakers, Rick; Driessen, Johanna; de Vries, Frank; van den Bemt, Patricia
2017-01-01
In this controlled before-after study the effect of improvements, derived from Lean Six Sigma strategy, on parenteral medication administration errors and the potential risk of harm was determined. During baseline measurement, on control versus intervention ward, at least one administration error occurred in 14 (74%) and 6 (46%) administrations with potential risk of harm in 6 (32%) and 1 (8%) administrations. Most administration errors with high potential risk of harm occurred in bolus injections: 8 (57%) versus 2 (67%) bolus injections were injected too fast with a potential risk of harm in 6 (43%) and 1 (33%) bolus injections on control and intervention ward. Implemented improvement strategies, based on major causes of too fast administration of bolus injections, were: Substitution of bolus injections by infusions, education, availability of administration information and drug round tabards. Post intervention, on the control ward in 76 (76%) administrations at least one error was made (RR 1.03; CI95:0.77-1.38), with a potential risk of harm in 14 (14%) administrations (RR 0.45; CI95:0.20-1.02). In 40 (68%) administrations on the intervention ward at least one error occurred (RR 1.47; CI95:0.80-2.71) but no administrations were associated with a potential risk of harm. A shift in wrong duration administration errors from bolus injections to infusions, with a reduction of potential risk of harm, seems to have occurred on the intervention ward. Although data are insufficient to prove an effect, Lean Six Sigma was experienced as a suitable strategy to select tailored improvements. Further studies are required to prove the effect of the strategy on parenteral medication administration errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weston, Louise Marie
2007-09-01
A recent report on criticality accidents in nuclear facilities indicates that human error played a major role in a significant number of incidents with serious consequences and that some of these human errors may be related to the emotional state of the individual. A pre-shift test to detect a deleterious emotional state could reduce the occurrence of such errors in critical operations. The effectiveness of pre-shift testing is a challenge because of the need to gather predictive data in a relatively short test period and the potential occurrence of learning effects due to a requirement for frequent testing. This report reviews the different types of reliability and validity methods and testing and statistical analysis procedures to validate measures of emotional state. The ultimate value of a validation study depends upon the percentage of human errors in critical operations that are due to the emotional state of the individual. A review of the literature to identify the most promising predictors of emotional state for this application is highly recommended.
Wilf-Miron, R; Lewenhoff, I; Benyamini, Z; Aviram, A
2003-02-01
The development of a medical risk management programme based on the aviation safety approach and its implementation in a large ambulatory healthcare organisation is described. The following key safety principles were applied: (1). errors inevitably occur and usually derive from faulty system design, not from negligence; (2). accident prevention should be an ongoing process based on open and full reporting; (3). major accidents are only the "tip of the iceberg" of processes that indicate possibilities for organisational learning. Reporting physicians were granted immunity, which encouraged open reporting of errors. A telephone "hotline" served the medical staff for direct reporting and receipt of emotional support and medical guidance. Any adverse event which had learning potential was debriefed, while focusing on the human cause of error within a systemic context. Specific recommendations were formulated to rectify processes conducive to error when failures were identified. During the first 5 years of implementation, the aviation safety concept and tools were successfully adapted to ambulatory care, fostering a culture of greater concern for patient safety through risk management while providing support to the medical staff.
Westbrook, J I; Li, L; Raban, M Z; Baysari, M T; Mumford, V; Prgomet, M; Georgiou, A; Kim, T; Lake, R; McCullagh, C; Dalla-Pozza, L; Karnon, J; O'Brien, T A; Ambler, G; Day, R; Cowell, C T; Gazarian, M; Worthington, R; Lehmann, C U; White, L; Barbaric, D; Gardo, A; Kelly, M; Kennedy, P
2016-10-21
Medication errors are the most frequent cause of preventable harm in hospitals. Medication management in paediatric patients is particularly complex and consequently potential for harms are greater than in adults. Electronic medication management (eMM) systems are heralded as a highly effective intervention to reduce adverse drug events (ADEs), yet internationally evidence of their effectiveness in paediatric populations is limited. This study will assess the effectiveness of an eMM system to reduce medication errors, ADEs and length of stay (LOS). The study will also investigate system impact on clinical work processes. A stepped-wedge cluster randomised controlled trial (SWCRCT) will measure changes pre-eMM and post-eMM system implementation in prescribing and medication administration error (MAE) rates, potential and actual ADEs, and average LOS. In stage 1, 8 wards within the first paediatric hospital will be randomised to receive the eMM system 1 week apart. In stage 2, the second paediatric hospital will randomise implementation of a modified eMM and outcomes will be assessed. Prescribing errors will be identified through record reviews, and MAEs through direct observation of nurses and record reviews. Actual and potential severity will be assigned. Outcomes will be assessed at the patient-level using mixed models, taking into account correlation of admissions within wards and multiple admissions for the same patient, with adjustment for potential confounders. Interviews and direct observation of clinicians will investigate the effects of the system on workflow. Data from site 1 will be used to develop improvements in the eMM and implemented at site 2, where the SWCRCT design will be repeated (stage 2). The research has been approved by the Human Research Ethics Committee of the Sydney Children's Hospitals Network and Macquarie University. Results will be reported through academic journals and seminar and conference presentations. Australian New Zealand Clinical Trials Registry (ANZCTR) 370325. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Exome Sequence Analysis of 14 Families With High Myopia.
Kloss, Bethany A; Tompson, Stuart W; Whisenhunt, Kristina N; Quow, Krystina L; Huang, Samuel J; Pavelec, Derek M; Rosenberg, Thomas; Young, Terri L
2017-04-01
To identify causal gene mutations in 14 families with autosomal dominant (AD) high myopia using exome sequencing. Select individuals from 14 large Caucasian families with high myopia were exome sequenced. Gene variants were filtered to identify potential pathogenic changes. Sanger sequencing was used to confirm variants in original DNA, and to test for disease cosegregation in additional family members. Candidate genes and chromosomal loci previously associated with myopic refractive error and its endophenotypes were comprehensively screened. In 14 high myopia families, we identified 73 rare and 31 novel gene variants as candidates for pathogenicity. In seven of these families, two of the novel and eight of the rare variants were within known myopia loci. A total of 104 heterozygous nonsynonymous rare variants in 104 genes were identified in 10 out of 14 probands. Each variant cosegregated with affection status. No rare variants were identified in genes known to cause myopia or in genes closest to published genome-wide association study association signals for refractive error or its endophenotypes. Whole exome sequencing was performed to determine gene variants implicated in the pathogenesis of AD high myopia. This study provides new genes for consideration in the pathogenesis of high myopia, and may aid in the development of genetic profiling of those at greatest risk for attendant ocular morbidities of this disorder.
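The filtering logic described above (rare, protein-altering variants that cosegregate with affection status) can be sketched roughly as follows; the column names, frequency threshold, and example variants are hypothetical and do not reflect the study's actual pipeline:

```python
# Illustrative variant-filtering sketch; thresholds, columns, and variants
# are hypothetical and not taken from the study.
import pandas as pd

variants = pd.DataFrame({
    "gene":        ["GENE_A", "GENE_B", "GENE_C"],
    "consequence": ["missense", "synonymous", "missense"],
    "population_af": [0.00005, 0.0001, 0.02],     # population allele frequency
    "carriers":    [3, 3, 2],                     # affected relatives carrying variant
    "n_affected":  [3, 3, 3],                     # affected relatives genotyped
})

candidates = variants[
    (variants["consequence"] != "synonymous")                 # protein-altering changes
    & (variants["population_af"] < 0.001)                     # rare or novel
    & (variants["carriers"] == variants["n_affected"])        # cosegregates with disease
]
print(candidates["gene"].tolist())
```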
Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.
Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn
2017-07-01
The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study used feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire followed by summarizing the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top 10 errors list based on means with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.
Cao, Hui; Stetson, Peter; Hripcsak, George
2003-01-01
Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors and these reported therapeutic-related errors were mainly medication errors. Keyword searches combined with manual review indicated some medical errors that were reported in medical records. It had a low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
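A minimal sketch of this kind of keyword screen, with manual-review labels and per-term positive predictive value, is given below; the example notes and labels are invented for illustration:

```python
# Keyword screen sketch over clinical note text; the notes and the
# manual-review labels below are illustrative stand-ins, not study data.
import re

keywords = ["mistake", "error", "incorrect", "inadvertent", "iatrogenic"]
notes = [
    ("Medication error: heparin given at twice the intended rate.", True),
    ("Patient denies any incorrect use of inhaler at home.",        False),
    ("No mistake was identified on review of the transfusion log.", False),
]  # (note text, manually confirmed physician-reported error)

for kw in keywords:
    hits = [(text, is_error) for text, is_error in notes
            if re.search(rf"\b{kw}\b", text, flags=re.IGNORECASE)]
    if hits:
        ppv = sum(is_error for _, is_error in hits) / len(hits)
        print(f"{kw!r}: {len(hits)} hits, PPV = {ppv:.0%}")
```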
Furness, Alan R; Callan, Richard S; Mackert, J Rodway; Mollica, Anthony G
2018-01-01
The aim of this study was to evaluate the effectiveness of the Planmeca Compare software in identifying and quantifying a common critical error in dental students' crown preparations. In 2014-17, a study was conducted at one U.S. dental school that compared an ideal crown prep, made by a faculty member on a dentoform, with modified preps. Two types of preparation errors were created by the addition of flowable composite to the occlusal surface of identical dies of the preparations to represent the underreduction of the distolingual cusp. The error was divided into two classes: the minor class allowed for 1 mm of occlusal clearance, and the major class allowed for no occlusal clearance. The preparations were then digitally evaluated against the ideal preparation using Planmeca Compare. Percent comparison values were obtained from each trial and averaged together. False positives and false negatives were also identified and used to determine the accuracy of the evaluation. Critical errors that did not involve a substantial change in the surface area of the preparation were inconsistently identified. Within the limitations of this study, the authors concluded that the Compare software was unable to consistently identify common critical errors within an acceptable degree of error.
Poon, Eric G; Cina, Jennifer L; Churchill, William W; Mitton, Patricia; McCrea, Michelle L; Featherstone, Erica; Keohane, Carol A; Rothschild, Jeffrey M; Bates, David W; Gandhi, Tejal K
2005-01-01
We performed a direct observation pre-post study to evaluate the impact of barcode technology on medication dispensing errors and potential adverse drug events in the pharmacy of a tertiary-academic medical center. We found that barcode technology significantly reduced the rate of target dispensing errors leaving the pharmacy by 85%, from 0.37% to 0.06%. The rate of potential adverse drug events (ADEs) due to dispensing errors was also significantly reduced by 63%, from 0.19% to 0.069%. In a 735-bed hospital where 6 million doses of medications are dispensed per year, this technology is expected to prevent about 13,000 dispensing errors and 6,000 potential ADEs per year. PMID:16779372
tPA Prescription and Administration Errors within a Regional Stroke System
Chung, Lee S; Tkach, Aleksander; Lingenfelter, Erin M; Dehoney, Sarah; Rollo, Jeannie; de Havenon, Adam; DeWitt, Lucy Dana; Grantz, Matthew Ryan; Wang, Haimei; Wold, Jana J; Hannon, Peter M; Weathered, Natalie R; Majersik, Jennifer J
2015-01-01
Background IV tPA utilization in acute ischemic stroke (AIS) requires weight-based dosing and a standardized infusion rate. In our regional network, we have tried to minimize tPA dosing errors. We describe the frequency and types of tPA administration errors made in our comprehensive stroke center (CSC) and at community hospitals (CHs) prior to transfer. Methods Using our stroke quality database, we extracted clinical and pharmacy information on all patients who received IV tPA from 2010–11 at the CSC or CH prior to transfer. All records were analyzed for the presence of inclusion/exclusion criteria deviations or tPA errors in prescription, reconstitution, dispensing, or administration, and analyzed for association with outcomes. Results We identified 131 AIS cases treated with IV tPA: 51% female; mean age 68; 32% treated at CSC, 68% at CH (including 26% by telestroke) from 22 CHs. tPA prescription and administration errors were present in 64% of all patients (41% CSC, 75% CH, p<0.001), the most common being incorrect dosage for body weight (19% CSC, 55% CH, p<0.001). Of the 27 overdoses, there were 3 deaths due to systemic hemorrhage or ICH. Nonetheless, outcomes (parenchymal hematoma, mortality, mRS) did not differ between CSC and CH patients nor between those with and without errors. Conclusion Despite focus on minimization of tPA administration errors in AIS patients, such errors were very common in our regional stroke system. Although an association between tPA errors and stroke outcomes was not demonstrated, quality assurance mechanisms are still necessary to reduce potentially dangerous, avoidable errors. PMID:26698642
Aryeetey, Genevieve Cecilia; Jehu-Appiah, Caroline; Spaan, Ernst; D'Exelle, Ben; Agyepong, Irene; Baltussen, Rob
2010-12-01
To evaluate the effectiveness of three alternative strategies to identify poor households: means testing (MT), proxy means testing (PMT) and participatory wealth ranking (PWR) in urban, rural and semi-urban settings in Ghana. The primary motivation was to inform implementation of the National Health Insurance policy of premium exemptions for the poorest households. Survey of 145-147 households per setting to collect data on consumption expenditure to estimate MT measures and of household assets to estimate PMT measures. We organized focus group discussions to derive PWR measures. We compared errors of inclusion and exclusion of PMT and PWR relative to MT, the latter being considered the gold standard measure to identify poor households. Compared to MT, the errors of exclusion and inclusion of PMT ranged between 0.46-0.63 and 0.21-0.36, respectively, and of PWR between 0.03-0.73 and 0.17-0.60, respectively, depending on the setting. Proxy means testing and PWR have considerable errors of exclusion and inclusion in comparison with MT. PWR is a subjective measure of poverty and has appeal because it reflects community's perceptions on poverty. However, as its definition of the poor varies across settings, its acceptability as a uniform strategy to identify the poor in Ghana may be questionable. PMT and MT are potential strategies to identify the poor, and their relative societal attractiveness should be judged in a broader economic analysis. This study also holds relevance to other programmes that require identification of the poor in low-income countries. © 2010 Blackwell Publishing Ltd.
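Under one common convention (exclusion error as the share of MT-poor households missed by the alternative measure, inclusion error as the share of MT-non-poor households wrongly classified as poor), the tabulation can be sketched as follows with hypothetical household data:

```python
# Toy tabulation of exclusion/inclusion errors of a proxy measure (e.g. PMT
# or PWR) against means testing as the gold standard; households are made up.
households = [
    # (poor by means test, poor by proxy measure)
    (True, True), (True, False), (True, False),
    (False, False), (False, True), (False, False),
]

mt_poor = [h for h in households if h[0]]
mt_nonpoor = [h for h in households if not h[0]]

exclusion_error = sum(1 for mt, proxy in mt_poor if not proxy) / len(mt_poor)
inclusion_error = sum(1 for mt, proxy in mt_nonpoor if proxy) / len(mt_nonpoor)
print(f"exclusion error (MT-poor missed by proxy): {exclusion_error:.2f}")
print(f"inclusion error (MT-non-poor flagged as poor): {inclusion_error:.2f}")
```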
Shi, Joy; Korsiak, Jill; Roth, Daniel E
2018-03-01
We aimed to demonstrate the use of jackknife residuals to take advantage of the longitudinal nature of available growth data in assessing potential biologically implausible values and outliers. Artificial errors were induced in 5% of length, weight, and head circumference measurements, measured on 1211 participants from the Maternal Vitamin D for Infant Growth (MDIG) trial from birth to 24 months of age. Each child's sex- and age-standardized z-score or raw measurements were regressed as a function of age in child-specific models. Each error responsible for a biologically implausible decrease between a consecutive pair of measurements was identified based on the higher of the two absolute values of jackknife residuals in each pair. In further analyses, outliers were identified as those values beyond fixed cutoffs of the jackknife residuals (e.g., greater than +5 or less than -5 in primary analyses). Kappa, sensitivity, and specificity were calculated over 1000 simulations to assess the ability of the jackknife residual method to detect induced errors and to compare these methods with the use of conditional growth percentiles and conventional cross-sectional methods. Among the induced errors that resulted in a biologically implausible decrease in measurement between two consecutive values, the jackknife residual method identified the correct value in 84.3%-91.5% of these instances when applied to the sex- and age-standardized z-scores, with kappa values ranging from 0.685 to 0.795. Sensitivity and specificity of the jackknife method were higher than those of the conditional growth percentile method, but specificity was lower than for conventional cross-sectional methods. Using jackknife residuals provides a simple method to identify biologically implausible values and outliers in longitudinal child growth data sets in which each child contributes at least 4 serial measurements. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
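The jackknife-residual screen described above can be sketched in a few lines: for each child, regress the z-scores on age, compute externally studentized (jackknife) residuals, and flag values beyond the fixed ±5 cutoff. The snippet below is an illustrative Python sketch with made-up measurements, not the MDIG analysis code.

```python
import numpy as np

def jackknife_residuals(age, z):
    """Externally studentized (jackknife) residuals from a simple linear
    regression of one child's z-scores on age."""
    X = np.column_stack([np.ones_like(age), age])
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)      # leverages
    s2 = resid @ resid / (n - p)                       # overall residual variance
    s2_loo = (s2 * (n - p) - resid**2 / (1 - h)) / (n - p - 1)   # leave-one-out variance
    return resid / np.sqrt(s2_loo * (1 - h))

# Hypothetical example: 6 serial length-for-age z-scores with one induced error at 9 months
age = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 24.0])
z   = np.array([-0.4, -0.3, -0.2, -3.5, -0.1, 0.1])    # -3.5 is the implausible value

t = jackknife_residuals(age, z)
print(np.round(t, 2), np.abs(t) > 5)    # flag values beyond the fixed ±5 cutoff
```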
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, Antonio L., E-mail: adamato@lroc.harvard.edu; Viswanathan, Akila N.; Don, Sarah M.
2014-10-15
Purpose: To investigate the use of a system using electromagnetic tracking (EMT), post-processing and an error-detection algorithm for detecting errors and resolving uncertainties in high-dose-rate brachytherapy catheter digitization for treatment planning. Methods: EMT was used to localize 15 catheters inserted into a phantom using a stepwise acquisition technique. Five distinct acquisition experiments were performed. Noise associated with the acquisition was calculated. The dwell location configuration was extracted from the EMT data. A CT scan of the phantom was performed, and five distinct catheter digitization sessions were performed. No a priori registration of the CT scan coordinate system with the EMT coordinate system was performed. CT-based digitization was automatically extracted from the brachytherapy plan DICOM files (CT), and rigid registration was performed between EMT and CT dwell positions. EMT registration error was characterized in terms of the mean and maximum distance between corresponding EMT and CT dwell positions per catheter. An algorithm for error detection and identification was presented. Three types of errors were systematically simulated: swap of two catheter numbers, partial swap of catheter number identification for parts of the catheters (mix), and catheter-tip shift. Error-detection sensitivity (number of simulated scenarios correctly identified as containing an error/number of simulated scenarios containing an error) and specificity (number of scenarios correctly identified as not containing errors/number of correct scenarios) were calculated. Catheter identification sensitivity (number of catheters correctly identified as erroneous across all scenarios/number of erroneous catheters across all scenarios) and specificity (number of catheters correctly identified as correct across all scenarios/number of correct catheters across all scenarios) were calculated. The mean detected and identified shift was calculated. Results: The maximum noise ±1 standard deviation associated with the EMT acquisitions was 1.0 ± 0.1 mm, and the mean noise was 0.6 ± 0.1 mm. Registration of all the EMT and CT dwell positions was associated with a mean catheter error of 0.6 ± 0.2 mm, a maximum catheter error of 0.9 ± 0.4 mm, a mean dwell error of 1.0 ± 0.3 mm, and a maximum dwell error of 1.3 ± 0.7 mm. Error detection and catheter identification sensitivity and specificity of 100% were observed for swap, mix and shift (≥2.6 mm for error detection; ≥2.7 mm for catheter identification) errors. A mean detected shift of 1.8 ± 0.4 mm and a mean identified shift of 1.9 ± 0.4 mm were observed. Conclusions: Registration of the EMT dwell positions to the CT dwell positions was possible with a residual mean error per catheter of 0.6 ± 0.2 mm and a maximum error for any dwell of 1.3 ± 0.7 mm. These low residual registration errors show that quality assurance of the general characteristics of the catheters and of possible errors affecting one specific dwell position is possible. The sensitivity and specificity of the catheter digitization verification algorithm was 100% for swap and mix errors and for shifts ≥2.6 mm. On average, shifts ≥1.8 mm were detected, and shifts ≥1.9 mm were detected and identified.
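The rigid registration step described above (aligning EMT dwell positions to CT dwell positions without any a priori coordinate-system registration) is commonly solved with a Kabsch/Procrustes least-squares fit. The sketch below illustrates that step on synthetic dwell positions; it is an assumption-laden illustration, not the authors' implementation.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) alignment of source points
    (e.g., EMT dwell positions) to destination points (e.g., CT dwell positions)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

# Synthetic dwell positions: CT points are a rotated and translated copy of EMT points
emt = np.random.default_rng(1).random((20, 3)) * 100.0
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
ct = emt @ R_true.T + np.array([5.0, -3.0, 2.0])

R, t = rigid_register(emt, ct)
residual = np.linalg.norm(emt @ R.T + t - ct, axis=1)
print(residual.max())   # ~0 for noise-free data; on the order of 1 mm with real EMT noise
```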
Processor register error correction management
Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.
2016-12-27
Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.
Dissociable Genetic Contributions to Error Processing: A Multimodal Neuroimaging Study
Agam, Yigal; Vangel, Mark; Roffman, Joshua L.; Gallagher, Patience J.; Chaponis, Jonathan; Haddad, Stephen; Goff, Donald C.; Greenberg, Jennifer L.; Wilhelm, Sabine; Smoller, Jordan W.; Manoach, Dara S.
2014-01-01
Background Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN), an event-related potential, and functional MRI activation of the dorsal anterior cingulate cortex (dACC). While theorized to reflect the same neural process, recent evidence suggests that the ERN arises from the posterior cingulate cortex, not the dACC. Here, we tested the hypothesis that these two error markers also have different genetic mediation. Methods We measured both error markers in a sample of 92 participants, comprising healthy individuals and those with diagnoses of schizophrenia, obsessive-compulsive disorder or autism spectrum disorder. Participants performed the same task during functional MRI and simultaneously acquired magnetoencephalography and electroencephalography. We examined the mediation of the error markers by two single nucleotide polymorphisms: dopamine D4 receptor (DRD4) C-521T (rs1800955), which has been associated with the ERN, and methylenetetrahydrofolate reductase (MTHFR) C677T (rs1801133), which has been associated with error-related dACC activation. We then compared the effects of each polymorphism on the two error markers modeled as a bivariate response. Results We replicated, in the schizophrenia and obsessive-compulsive disorder groups, our previous report of a posterior cingulate source of the ERN in healthy participants. The effect of genotype on error markers did not differ significantly by diagnostic group. DRD4 C-521T allele load had a significant linear effect on ERN amplitude, but not on dACC activation, and this difference was significant. MTHFR C677T allele load had a significant linear effect on dACC activation but not ERN amplitude, but the difference in effects on the two error markers was not significant. Conclusions DRD4 C-521T, but not MTHFR C677T, had a significant differential effect on two canonical error markers. Together with the anatomical dissociation between the ERN and error-related dACC activation, these findings suggest that these error markers have different neural and genetic mediation. PMID:25010186
Sethuraman, Usha; Kannikeswaran, Nirupama; Murray, Kyle P; Zidan, Marwan A; Chamberlain, James M
2015-06-01
Prescription errors occur frequently in pediatric emergency departments (PEDs). The effect of computerized physician order entry (CPOE) with electronic medication alert system (EMAS) on these is unknown. The objective was to compare prescription error rates before and after introduction of CPOE with EMAS in a PED. The hypothesis was that CPOE with EMAS would significantly reduce the rate and severity of prescription errors in the PED. A prospective comparison of a sample of outpatient medication prescriptions 5 months before and after CPOE with EMAS implementation (7,268 before and 7,292 after) was performed. Error types and rates, alert types and significance, and physician response were noted. Medication errors were deemed significant if there was a potential to cause life-threatening injury, failure of therapy, or an adverse drug effect. There was a significant reduction in the errors per 100 prescriptions (10.4 before vs. 7.3 after; absolute risk reduction = 3.1, 95% confidence interval [CI] = 2.2 to 4.0). Drug dosing error rates decreased from 8 to 5.4 per 100 (absolute risk reduction = 2.6, 95% CI = 1.8 to 3.4). Alerts were generated for 29.6% of prescriptions, with 45% involving drug dose range checking. The sensitivity of CPOE with EMAS in identifying errors in prescriptions was 45.1% (95% CI = 40.8% to 49.6%), and the specificity was 57% (95% CI = 55.6% to 58.5%). Prescribers modified 20% of the dosing alerts, resulting in the error not reaching the patient. Conversely, 11% of true dosing alerts for medication errors were overridden by the prescribers; of the overridden alerts, 88 (11.3%) resulted in medication errors and 684 (88.6%) were false-positive alerts. A CPOE with EMAS was associated with a decrease in overall prescription errors in our PED. Further system refinements are required to reduce the high false-positive alert rates. © 2015 by the Society for Academic Emergency Medicine.
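The reported absolute risk reduction and confidence interval follow from a standard risk-difference calculation on two proportions. The snippet below reproduces the headline figure; the error counts are approximated from the reported rates (10.4 and 7.3 per 100), so they are assumptions for illustration.

```python
from math import sqrt

def risk_difference_ci(errors_pre, n_pre, errors_post, n_post, z=1.96):
    """Risk difference (absolute risk reduction) with a 95% Wald confidence interval."""
    p1, p2 = errors_pre / n_pre, errors_post / n_post
    arr = p1 - p2
    se = sqrt(p1 * (1 - p1) / n_pre + p2 * (1 - p2) / n_post)
    return arr, arr - z * se, arr + z * se

# Counts back-calculated from the reported rates of 10.4 and 7.3 errors per 100 prescriptions
arr, lo, hi = risk_difference_ci(756, 7268, 532, 7292)
print(f"ARR = {100*arr:.1f} per 100 (95% CI {100*lo:.1f} to {100*hi:.1f})")
# -> roughly 3.1 (2.2 to 4.0), matching the reported absolute risk reduction
```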
Schiffinger, Michael; Latzke, Markus; Steyrer, Johannes
2016-01-01
Safety climate (SC) and more recently patient engagement (PE) have been identified as potential determinants of patient safety, but conceptual and empirical studies combining both are lacking. On the basis of extant theories and concepts in safety research, this study investigates the effect of PE in conjunction with SC on perceived error occurrence (pEO) in hospitals, controlling for various staff-, patient-, and hospital-related variables as well as the amount of stress and (lack of) organizational support experienced by staff. Besides the main effects of PE and SC on error occurrence, their interaction is also examined. In 66 hospital units, 4,345 patients assessed the degree of PE, and 811 staff assessed SC and pEO. PE was measured with a new instrument, capturing its core elements according to a recent literature review: Information Provision (both active and passive) and Activation and Collaboration. SC and pEO were measured with validated German-language questionnaires. Besides standard regression and correlational analyses, partial least squares analysis was employed to model the main and interaction effects of PE and SC on pEO, also controlling for stress and (lack of) support perceived by staff, various staff and patient attributes, and potential single-source bias. Both PE and SC are associated with lower pEO, to a similar extent. The joint effect of these predictors suggests a substitution effect rather than a mutually reinforcing interaction. Accounting for control variables and/or potential single-source bias slightly attenuates some effects without altering the results. Ignoring PE amounts to forgoing a potential source of additional safety. On the other hand, despite the abovementioned substitution effect and conjectures of SC being inert, PE should not be considered a replacement for SC.
Exploratory Factor Analysis of Reading, Spelling, and Math Errors
ERIC Educational Resources Information Center
O'Brien, Rebecca; Pan, Xingyu; Courville, Troy; Bray, Melissa A.; Breaux, Kristina; Avitia, Maria; Choi, Dowon
2017-01-01
Norm-referenced error analysis is useful for understanding individual differences in students' academic skill development and for identifying areas of skill strength and weakness. The purpose of the present study was to identify underlying connections between error categories across five language and math subtests of the Kaufman Test of…
System review: a method for investigating medical errors in healthcare settings.
Alexander, G L; Stone, T T
2000-01-01
System analysis is a process of evaluating objectives, resources, structure, and design of businesses. System analysis can be used by leaders to collaboratively identify breakthrough opportunities to improve system processes. In healthcare systems, system analysis can be used to review medical errors (system occurrences) that may place patients at risk for injury, disability, and/or death. This study utilizes a case management approach to identify medical errors. Utilizing an interdisciplinary approach, a System Review Team was developed to identify trends in system occurrences, facilitate communication, and enhance the quality of patient care by reducing medical errors.
Prescribing Errors Involving Medication Dosage Forms
Lesar, Timothy S
2002-01-01
CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138
Qi, Yulin; Geib, Timon; Schorr, Pascal; Meier, Florian; Volmer, Dietrich A
2015-01-15
Isobaric interferences in human serum can potentially influence the measured concentration levels of 25-hydroxyvitamin D [25(OH)D], when low resolving power liquid chromatography/tandem mass spectrometry (LC/MS/MS) instruments and non-specific MS/MS product ions are employed for analysis. In this study, we provide a detailed characterization of these interferences and a technical solution to reduce the associated systematic errors. Detailed electrospray ionization Fourier transform ion cyclotron resonance (FTICR) high-resolution mass spectrometry (HRMS) experiments were used to characterize co-extracted isobaric components of 25(OH)D from human serum. Differential ion mobility spectrometry (DMS), as a gas-phase ion filter, was implemented on a triple quadrupole mass spectrometer for separation of the isobars. HRMS revealed the presence of multiple isobaric compounds in extracts of human serum for different sample preparation methods. Several of these isobars had the potential to increase the peak areas measured for 25(OH)D on low-resolution MS instruments. A major isobaric component was identified as pentaerythritol oleate, a technical lubricant, which was probably an artifact from the analytical instrumentation. DMS was able to remove several of these isobars prior to MS/MS, when implemented on the low-resolution triple quadrupole mass spectrometer. It was shown in this proof-of-concept study that DMS-MS has the potential to significantly decrease systematic errors, and thus improve accuracy of vitamin D measurements using LC/MS/MS. Copyright © 2014 John Wiley & Sons, Ltd.
Dinges, Eric; Felderman, Nicole; McGuire, Sarah; Gross, Brandie; Bhatia, Sudershan; Mott, Sarah; Buatti, John; Wang, Dongxu
2015-01-01
Background and Purpose This study evaluates the potential efficacy and robustness of functional bone marrow sparing (BMS) using intensity-modulated proton therapy (IMPT) for cervical cancer, with the goal of reducing hematologic toxicity. Material and Methods IMPT plans with a prescription dose of 45 Gy were generated for ten patients who had received BMS intensity-modulated x-ray therapy (IMRT). Functional bone marrow was identified by 18F-fluorothymidine positron emission tomography. IMPT plans were designed to minimize the volume of functional bone marrow receiving 5–40 Gy while maintaining similar target coverage and healthy organ sparing as IMRT. IMPT robustness was analyzed with ±3% range uncertainty errors and/or ±3 mm translational setup errors in all three principal dimensions. Results In the static scenario, the median dose volume reductions for functional bone marrow by IMPT were: 32% for V5Gy, 47% for V10Gy, 54% for V20Gy, and 57% for V40Gy, all with p<0.01 compared to IMRT. With assumed errors, even the worst-case reductions by IMPT were: 23% for V5Gy, 37% for V10Gy, 41% for V20Gy, and 39% for V40Gy, all with p<0.01. Conclusions The potential sparing of functional bone marrow by IMPT for cervical cancer is significant and robust under realistic systematic range uncertainties and clinically relevant setup errors. PMID:25981130
Improving accuracy of clinical coding in surgery: collaboration is key.
Heywood, Nick A; Gill, Michael D; Charlwood, Natasha; Brindle, Rachel; Kirwan, Cliona C
2016-08-01
Clinical coding data provide the basis for Hospital Episode Statistics and Healthcare Resource Group codes. High accuracy of this information is required for payment by results, allocation of health and research resources, and public health data and planning. We sought to identify the level of accuracy of clinical coding in general surgical admissions across hospitals in the Northwest of England. Clinical coding departments identified a total of 208 emergency general surgical patients discharged between 1st March and 15th August 2013 from seven hospital trusts (median = 20, range = 16-60). Blinded re-coding was performed by a senior clinical coder and clinician, with results compared with the original coding outcome. Recorded codes were generated from OPCS-4 & ICD-10. Of all cases, 194 of 208 (93.3%) had at least one coding error and 9 of 208 (4.3%) had errors in both primary diagnosis and primary procedure. Errors were found in 64 of 208 (30.8%) of primary diagnoses and 30 of 137 (21.9%) of primary procedure codes. Median tariff using original codes was £1411.50 (range, £409-9138). Re-calculation using updated clinical codes showed a median tariff of £1387.50, P = 0.997 (range, £406-10,102). The most frequent reasons for incorrect coding were "coder error" and a requirement for "clinical interpretation of notes". Errors in clinical coding are multifactorial and have significant impact on primary diagnosis, potentially affecting the accuracy of Hospital Episode Statistics data and in turn the allocation of health care resources and public health planning. As we move toward surgeon specific outcomes, surgeons should increase collaboration with coding departments to ensure the system is robust. Copyright © 2016 Elsevier Inc. All rights reserved.
Radar error statistics for the space shuttle
NASA Technical Reports Server (NTRS)
Lear, W. M.
1979-01-01
C-band and S-band radar error statistics recommended for use with the ground-tracking programs that process space shuttle tracking data are presented. The statistics are divided into two parts: bias error statistics, using the subscript B, and high frequency error statistics, using the subscript q. Bias errors may be slowly varying to constant. High frequency random errors (noise) are rapidly varying and may or may not be correlated from sample to sample. Bias errors were mainly due to hardware defects and to errors in correction for atmospheric refraction effects. High frequency noise was mainly due to hardware and to atmospheric scintillation. Three types of atmospheric scintillation were identified: horizontal, vertical, and line of sight. This was the first time that horizontal and line of sight scintillations were identified.
Exposed and embedded corrections in aphasia therapy: issues of voice and identity.
Simmons-Mackie, Nina; Damico, Jack S
2008-01-01
Because communication after the onset of aphasia can be fraught with errors, therapist corrections are pervasive in therapy for aphasia. Although corrections are designed to improve the accuracy of communication, some corrections can have social and emotional consequences during interactions. That is, exposure of errors can potentially silence the 'voice' of a speaker by orienting to an utterance as unacceptable. Although corrections can marginalize speakers with aphasia, the practice has not been widely investigated. A qualitative study of corrections during aphasia therapy was undertaken to describe corrections in therapy, identify patterns of occurrence, and develop hypotheses regarding the potential effects of corrections. Videotapes of six individual and five group aphasia therapy sessions were analysed. Sequences consistent with a definition of a therapist 'correction' were identified. Corrections were defined as instances when the therapist offered a 'fix' for a perceived error in the client's talk even though the intent was apparent. Two categories of correction were identified and were consistent with Jefferson's (1987) descriptions of exposed and embedded corrections. Exposed corrections involved explicit correcting by the therapist, while embedded corrections occurred implicitly within the ongoing talk. Patterns of occurrence appeared consistent with philosophical orientations of therapy sessions. Exposed corrections were more prevalent in sessions focusing on repairing deficits, while embedded corrections were prevalent in sessions focusing on natural communication events (e.g. conversation). In addition, exposed corrections were sometimes used when client offerings were plausible or appropriate, but were inconsistent with therapist expectations. The observation that some instances of exposed corrections effectively silenced the voice or self-expression of the person with aphasia has significant implications for outcomes from aphasia therapy. By focusing on accurate productions versus communicative intents, therapy runs the risk of reducing self-esteem and communicative confidence, as well as reinforcing a sense of 'helplessness' and disempowerment among people with aphasia. The results suggest that clinicians should carefully calibrate the use of exposed and embedded corrections to balance linguistic and psychosocial goals.
Wong, Aaron L; Shelhamer, Mark
2014-05-01
Adaptive processes are crucial in maintaining the accuracy of body movements and rely on error storage and processing mechanisms. Although these error-correction mechanisms are classically studied with adaptation paradigms, evidence of their ongoing operation should also be detectable in other movements. Despite this connection, current adaptation models are challenged when forecasting adaptation ability with measures of baseline behavior. On the other hand, we have previously identified an error-correction process present in a particular form of baseline behavior, the generation of predictive saccades. This process exhibits long-term intertrial correlations that decay gradually (as a power law) and are best characterized with the tools of fractal time series analysis. Since this baseline task and adaptation both involve error storage and processing, we sought to find a link between the intertrial correlations of the error-correction process in predictive saccades and the ability of subjects to alter their saccade amplitudes during an adaptation task. Here we find just such a relationship: the stronger the intertrial correlations during prediction, the more rapid the acquisition of adaptation. This reinforces the links found previously between prediction and adaptation in motor control and suggests that current adaptation models are inadequate to capture the complete dynamics of these error-correction processes. A better understanding of the similarities in error processing between prediction and adaptation might provide the means to forecast adaptation ability with a baseline task. This would have many potential uses in physical therapy and the general design of paradigms of motor adaptation. Copyright © 2014 the American Physiological Society.
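Long-memory intertrial correlations of the kind described above are often summarized by a power-law exponent. One simple estimate, sketched below, fits the slope of the log-log power spectrum of a trial-by-trial error series; this is a generic illustration on simulated data, not necessarily the authors' specific fractal-analysis method.

```python
import numpy as np

def spectral_slope(series):
    """Estimate a power-law exponent of intertrial correlations from the slope of
    the log-log power spectrum of a trial-by-trial error series (simple sketch;
    detrended fluctuation analysis is a common alternative)."""
    x = np.asarray(series, dtype=float)
    x -= x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x))
    keep = freqs > 0                                   # drop the DC component
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(psd[keep]), 1)
    return slope        # near 0 for uncorrelated noise, more negative for long memory

rng = np.random.default_rng(0)
print(spectral_slope(rng.standard_normal(512)))             # white-noise baseline
print(spectral_slope(np.cumsum(rng.standard_normal(512))))  # strongly correlated series
```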
Vilà-Balló, Adrià; Hdez-Lafuente, Prado; Rostan, Carles; Cunillera, Toni; Rodriguez-Fornells, Antoni
2014-10-01
Performance monitoring is crucial for well-adapted behavior. Offenders typically have a pervasive repetition of harmful-impulsive behaviors, despite an awareness of the negative consequences of their actions. However, the link between performance monitoring and aggressive behavior in juvenile offenders has not been closely investigated. Event-related brain potentials (ERPs) were used to investigate performance monitoring in juvenile non-psychopathic violent offenders compared with a well-matched control group. Two ERP components associated with error monitoring, error-related negativity (ERN) and error-positivity (Pe), and two components related to inhibitory processing, the stop-N2 and stop-P3 components, were evaluated using a combined flanker-stop-signal task. The results showed that the amplitudes of the ERN, the stop-N2, the stop-P3, and the standard P3 components were clearly reduced in the offenders group. Remarkably, no differences were observed for the Pe. At the behavioral level, slower stop-signal reaction times were identified for offenders, which indicated diminished inhibitory processing. The present results suggest that the monitoring of one's own behavior is affected in juvenile violent offenders. Specifically, we determined that different aspects of executive function were affected in the studied offenders, including error processing (reduced ERN) and response inhibition (reduced N2 and P3). However, error awareness and compensatory post-error adjustment processes (error correction) were unaffected. The current pattern of results highlights the role of performance monitoring in the acquisition and maintenance of externalizing harmful behavior that is frequently observed in juvenile offenders. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Shaw, Susan
2013-02-01
The purpose of this discussion paper is to identify major technical errors made by Whittaker and McShane (2012) regarding the development and use of SLPSTAB (Shaw and Vaugeois, 1999; Vaugeois, 2000). SLPSTAB is a GIS-based data layer currently utilized as a regulatory tool for preliminarily screening slope stability potential on nonfederal, commercial timberlands in Washington State.
ERIC Educational Resources Information Center
Nushi, Musa
2016-01-01
Han's (2009, 2013) selective fossilization hypothesis (SFH) claims that L1 markedness and L2 input robustness determine the fossilizability (and learnability) of an L2 feature. To test the validity of the model, a pseudo-longitudinal study was designed in which the errors in the argumentative essays of 52 Iranian EFL learners were identified and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Deng, Zhiqun; Carlson, Thomas J.
2012-10-19
Tidal power has been identified as one of the most promising commercial-scale renewable energy sources. Puget Sound, Washington, is a potential site to deploy tidal power generating devices. The risk of injury for killer whales needs to be managed before the deployment of these types of devices can be approved by regulating authorities. A passive acoustic system consisting of two star arrays, each with four hydrophones, was designed and implemented for the detection and localization of Southern Resident killer whales. Deployment of the passive acoustic system was conducted at Sequim Bay, Washington. A total of nine test locations were chosen, within a radius of 250 m around the star arrays, to test our localization approach. For the localization algorithm, a least-squares solver was applied to obtain a bearing from each star array. The final source location was determined by the intersection of the bearings given by each of the two star arrays. Bearing and distance errors were obtained to compare the calculated and true (from Global Positioning System) locations. The results indicated that bearing errors were within 1.04° for eight of the test locations; one location had bearing errors slightly larger than expected due to the strong background noise at that position. For the distance errors, six of the test locations were within the range of 1.91 to 32.36 m. The other two test locations were near the intersection line between the centers of the two star arrays, which were expected to have large errors from the theoretical sensitivity analysis performed.
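The localization step described above (a bearing from each star array, then intersection of the two bearings) amounts to a small least-squares triangulation problem. The sketch below illustrates it in two dimensions with hypothetical array and source positions; it is not the deployed system's code.

```python
import numpy as np

def locate_from_bearings(p1, bearing1, p2, bearing2):
    """Estimate a source position from the bearings (radians, measured from the +x axis)
    reported by two arrays at known positions p1 and p2.
    Solves p1 + t1*d1 = p2 + t2*d2 in the least-squares sense."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    A = np.column_stack([d1, -d2])
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return p1 + t[0] * d1

# Hypothetical geometry: arrays 50 m apart, a source roughly 215 m away
array1 = np.array([0.0, 0.0])
array2 = np.array([50.0, 0.0])
source = np.array([120.0, 180.0])
b1 = np.arctan2(source[1] - array1[1], source[0] - array1[0])
b2 = np.arctan2(source[1] - array2[1], source[0] - array2[0])
print(np.round(locate_from_bearings(array1, b1, array2, b2), 2))
# Recovers [120. 180.] for noise-free bearings; with ~1 degree of bearing noise
# the position error grows with range, as reflected in the field results above.
```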
Avoidable errors in dealing with anaphylactoid reactions to iodinated contrast media.
Segal, Arthur J; Bush, William H
2011-03-01
Contrast reactions are much less common today than in the past. This is principally because of the current and predominant use of low and iso-osmolar contrast media compared with the prior use of high osmolality contrast media. As a result of the significantly diminished frequency, there are now fewer opportunities for physicians to recognize and appropriately treat such adverse reactions. In a review of the literature, combined with our own clinical and legal experience, 12 potential errors were identified; these are reviewed in detail so that they can be avoided by the physician-in-charge. Basic treatment considerations are presented along with a plan to systematize an approach to contrast reactions, simplify treatment options and plans, and schedule periodic drills.
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN-based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. Also, it emphasizes frequent sources of FORTRAN problems which require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action on solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.
Sykut-Cegielska, Jolanta
2015-01-01
Alkaptonuria is a rare inborn error of metabolism, identified over a century ago, but its basic pathomechanism (i.e. ochronosis) is still not completely explained. Although the clinical onset of osteoarthropathy and complications in other organs (including the heart and blood vessels, skin, eyes and kidneys) occurs in adulthood, the symptoms are progressive, cause severe pain and significantly limit the everyday life of the patients. Until now, no effective therapeutic methods have been known in alkaptonuria. Recently, thanks to an initiative of the international patient organization for alkaptonuria, hope for the availability of a potential treatment has appeared. Alkaptonuria is thus an example of the role of multidisciplinary care, cooperation and ongoing progress in the area of rare diseases.
Fuzzy risk analysis of a modern γ-ray industrial irradiator.
Castiglia, F; Giardina, M
2011-06-01
Fuzzy fault tree analyses were used to investigate accident scenarios that involve radiological exposure to operators working in industrial γ-ray irradiation facilities. The HEART method, a first generation human reliability analysis method, was used to evaluate the probability of adverse human error in these analyses. This technique was modified on the basis of fuzzy set theory to more directly take into account the uncertainties in the error-promoting factors on which the methodology is based. Moreover, with regard to some identified accident scenarios, fuzzy radiological exposure risk, expressed in terms of potential annual death, was evaluated. The calculated fuzzy risks for the examined plant were determined to be well below the reference risk suggested by International Commission on Radiological Protection.
McEachan, Rosemary R C; Giles, Sally J; Sirriyeh, Reema; Watt, Ian S; Wright, John
2012-01-01
Objective The aim of this systematic review was to develop a ‘contributory factors framework’ from a synthesis of empirical work which summarises factors contributing to patient safety incidents in hospital settings. Design A mixed-methods systematic review of the literature was conducted. Data sources Electronic databases (Medline, PsycInfo, ISI Web of knowledge, CINAHL and EMBASE), article reference lists, patient safety websites, registered study databases and author contacts. Eligibility criteria Studies were included that reported data from primary research in secondary care aiming to identify the contributory factors to error or threats to patient safety. Results 1502 potential articles were identified. 95 papers (representing 83 studies) which met the inclusion criteria were included, and 1676 contributory factors extracted. Initial coding of contributory factors by two independent reviewers resulted in 20 domains (eg, team factors, supervision and leadership). Each contributory factor was then coded by two reviewers to one of these 20 domains. The majority of studies identified active failures (errors and violations) as factors contributing to patient safety incidents. Individual factors, communication, and equipment and supplies were the other most frequently reported factors within the existing evidence base. Conclusions This review has culminated in an empirically based framework of the factors contributing to patient safety incidents. This framework has the potential to be applied across hospital settings to improve the identification and prevention of factors that cause harm to patients. PMID:22421911
Predictors of Errors of Novice Java Programmers
ERIC Educational Resources Information Center
Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.
2012-01-01
This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…
Prevalence and cost of hospital medical errors in the general and elderly United States populations.
Mallow, Peter J; Pandya, Bhavik; Horblyuk, Ruslan; Kaplan, Harold S
2013-12-01
The primary objective of this study was to quantify the differences in the prevalence rate and costs of hospital medical errors between the general population and an elderly population aged ≥65 years. Methods from an actuarial study of medical errors were modified to identify medical errors in the Premier Hospital Database using data from 2009. Visits with more than four medical errors were removed from the population to avoid over-estimation of cost. Prevalence rates were calculated based on the total number of inpatient visits. There were 3,466,596 total inpatient visits in 2009. Of these, 1,230,836 (36%) occurred in people aged ≥65. The prevalence rate was 49 medical errors per 1000 inpatient visits in the general cohort and 79 medical errors per 1000 inpatient visits for the elderly cohort. The top 10 medical errors accounted for more than 80% of the total in the general cohort and the 65+ cohort. The most costly medical error for the general population was postoperative infection ($569,287,000). Pressure ulcers were most costly ($347,166,257) in the elderly population. This study was conducted with a hospital administrative database, and assumptions were necessary to identify medical errors in the database. Further, there was no method to identify errors of omission or misdiagnoses within the database. This study indicates that the prevalence of hospital medical errors for the elderly is greater than that in the general population and that the associated cost of medical errors in the elderly population is quite substantial. Hospitals that further focus their attention on medical errors in the elderly population may see a significant reduction in costs due to medical errors, because a disproportionate percentage of medical errors occurs in this age group.
Identification and correction of systematic error in high-throughput sequence data
2011-01-01
Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position specific (depending on the location in the read) and sequence specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technology sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
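A minimal version of the idea of flagging statistically unlikely accumulations of errors at a genome position is a one-sided binomial test against a uniform per-base error rate. The sketch below is only an illustration of that statistical intuition; the SysCall classifier described above additionally uses features (such as information from overlapping paired reads and sequence motifs) not modelled here, and the error-rate and threshold values are assumptions.

```python
from scipy.stats import binom

def is_unlikely_error_pileup(mismatch_count, depth, per_base_error=1e-3, alpha=1e-6):
    """Flag a position whose mismatch pile-up is statistically unlikely under a
    uniform per-base sequencing error rate (illustrative sketch only)."""
    # P(X >= mismatch_count) for X ~ Binomial(depth, per_base_error)
    p_value = binom.sf(mismatch_count - 1, depth, per_base_error)
    return p_value < alpha

print(is_unlikely_error_pileup(mismatch_count=12, depth=200))  # True: candidate systematic error
print(is_unlikely_error_pileup(mismatch_count=1, depth=200))   # False: consistent with random noise
```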
DOE Office of Scientific and Technical Information (OSTI.GOV)
Passarge, M; Fix, M K; Manser, P
Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent consecutively executed error detection methods: a masking technique that verifies infield radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
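One of the checks listed above, the standard gamma evaluation (3%, 3 mm), can be illustrated with a minimal one-dimensional global gamma computation. The sketch below uses a synthetic dose profile and is a simplified illustration of the metric, not the EPID system's two-dimensional implementation.

```python
import numpy as np

def gamma_1d(ref, meas, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
    """Minimal 1-D global gamma (3%/3 mm) between a reference and a measured dose
    profile sampled on the same grid (exhaustive search; illustrative only)."""
    x = np.arange(len(ref)) * spacing_mm
    d_max = ref.max()                                   # global dose normalization
    gammas = np.empty(len(meas))
    for i, (xi, mi) in enumerate(zip(x, meas)):
        dose_term = ((mi - ref) / (dose_crit * d_max)) ** 2
        dist_term = ((xi - x) / dist_crit_mm) ** 2
        gammas[i] = np.sqrt(np.min(dose_term + dist_term))
    return gammas

ref  = np.exp(-((np.arange(50) - 25) / 8.0) ** 2)       # synthetic dose profile
meas = np.roll(ref, 1) * 1.01                           # ~1 mm shift plus 1% scaling
print((gamma_1d(ref, meas, spacing_mm=1.0) <= 1.0).mean())   # gamma pass rate
```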
Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C
2017-02-15
Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
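A crude re-check of a registry PSA value against the chart-text value, mirroring the four error categories above, could look like the sketch below. The comparison rules and tolerances are hypothetical; the SEER review relied on manual comparison with text documentation.

```python
def classify_psa_value(recorded, documented):
    """Classify a registry PSA value against the chart-text value using hypothetical
    rules that mirror the four categories described above (illustrative only)."""
    if documented is None:
        return "change related to unknown value"
    if recorded == documented:
        return "correct"
    # 10x discrepancies suggest an implied decimal point error
    if abs(recorded - 10 * documented) < 0.05 or abs(10 * recorded - documented) < 0.05:
        return "implied decimal point error"
    if abs(recorded - documented) < 1.0:
        return "nonsignificant change (<1 ng/mL)"
    return "abstraction or coding error"

print(classify_psa_value(66.0, 6.6))   # implied decimal point error
print(classify_psa_value(4.3, 4.1))    # nonsignificant change (<1 ng/mL)
```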
Differences in Error Detection Skills by Band and Choral Preservice Teachers
ERIC Educational Resources Information Center
Stambaugh, Laura A.
2016-01-01
Band and choral preservice teachers (N = 44) studied band and choral scores, listened to recordings of school ensembles, and identified errors in the recordings. Results indicated that preservice teachers identified significantly more errors when listening to recordings of their primary area (band majors listening to band, p = 0.045; choral majors…
A Framework for Identifying and Classifying Undergraduate Student Proof Errors
ERIC Educational Resources Information Center
Strickland, S.; Rand, B.
2016-01-01
This paper describes a framework for identifying, classifying, and coding student proofs, modified from existing proof-grading rubrics. The framework includes 20 common errors, as well as categories for interpreting the severity of the error. The coding scheme is intended for use in a classroom context, for providing effective student feedback. In…
Meurier, C E
2000-07-01
Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lon N. Haney; David I. Gertman
2003-04-01
Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center, and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.
An experimental study of fault propagation in a jet-engine controller. M.S. Thesis
NASA Technical Reports Server (NTRS)
Choi, Gwan Seung
1990-01-01
An experimental analysis of the impact of transient faults on a microprocessor-based jet engine controller, used in the Boeing 747 and 757 aircraft, is described. A hierarchical simulation environment which allows the injection of transients during run-time and the tracing of their impact is described. Verification of the accuracy of this approach is also provided. A determination of the probability that a transient results in latch, pin or functional errors is made. Given a transient fault, there is approximately an 80 percent chance that there is no impact on the chip. An empirical model to depict the process of error exploration and degeneration in the target system is derived. The model shows that, if no latch errors occur within eight clock cycles, no significant damage is likely to happen. Thus, the overall impact of a transient is well contained. A state transition model is also derived from the measured data, to describe the error propagation characteristics within the chip, and to quantify the impact of transients on the external environment. The model is used to identify and isolate the critical fault propagation paths, the module most sensitive to fault propagation and the module with the highest potential of causing external pin errors.
Financial errors in dementia: Testing a neuroeconomic conceptual framework
Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.
2013-01-01
Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884
Risk-Aware Planetary Rover Operation: Autonomous Terrain Classification and Path Planning
NASA Technical Reports Server (NTRS)
Ono, Masahiro; Fuchs, Thomas J.; Steffy, Amanda; Maimone, Mark; Yen, Jeng
2015-01-01
Identifying and avoiding terrain hazards (e.g., soft soil and pointy embedded rocks) are crucial for the safety of planetary rovers. This paper presents a newly developed ground-based Mars rover operation tool that mitigates risks from terrain by automatically identifying hazards on the terrain, evaluating their risks, and suggesting to operators safe path options that avoid potential risks while achieving specified goals. The tool will bring benefits to rover operations by reducing operation cost, by reducing the cognitive load on rover operators, by preventing human errors, and most importantly, by significantly reducing the risk of the loss of rovers.
MO-FG-202-05: Identifying Treatment Planning System Errors in IROC-H Phantom Irradiations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerns, J; Followill, D; Howell, R
Purpose: Treatment Planning System (TPS) errors can affect large numbers of cancer patients receiving radiation therapy. Using an independent recalculation system, the Imaging and Radiation Oncology Core-Houston (IROC-H) can identify institutions that have not sufficiently modelled their linear accelerators in their TPS model. Methods: Linear accelerator point measurement data from IROC-H’s site visits was aggregated and analyzed from over 30 linear accelerator models. Dosimetrically similar models were combined to create “classes”. The class data was used to construct customized beam models in an independent treatment dose verification system (TVS). Approximately 200 head and neck phantom plans from 2012 to 2015 were recalculated using this TVS. Plan accuracy was evaluated by comparing the measured dose to the institution’s TPS dose as well as to the TVS dose. In cases where the TVS was more accurate than the institution by an average of >2%, the institution was identified as having a non-negligible TPS error. Results: Of the ∼200 recalculated plans, the average improvement using the TVS was ∼0.1%; i.e. the recalculation, on average, slightly outperformed the institution’s TPS. Of all the recalculated phantoms, 20% were identified as having a non-negligible TPS error. Fourteen plans failed current IROC-H criteria; the average TVS improvement of the failing plans was ∼3% and 57% were found to have non-negligible TPS errors. Conclusion: IROC-H has developed an independent recalculation system to identify institutions that have considerable TPS errors. A large number of institutions were found to have non-negligible TPS errors. Even institutions that passed IROC-H criteria could be identified as having a TPS error. Resolution of such errors would improve dose delivery for a large number of IROC-H phantoms and ultimately, patients.
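The flagging criterion described above (the independent recalculation agreeing with measurement better than the institution's TPS by an average of more than 2%) can be expressed in a few lines. The sketch below uses hypothetical point doses and assumed variable names; it is not IROC-H's code.

```python
import numpy as np

def has_tps_modeling_error(measured, institution_tps, recalc_tvs, threshold=0.02):
    """Flag a phantom plan whose independent recalculation (TVS) matches the measured
    point doses better than the institution's TPS by more than 2% on average (sketch)."""
    measured = np.asarray(measured, dtype=float)
    tps_err = np.abs(np.asarray(institution_tps, dtype=float) - measured) / measured
    tvs_err = np.abs(np.asarray(recalc_tvs, dtype=float) - measured) / measured
    return float(np.mean(tps_err - tvs_err)) > threshold

# Hypothetical point doses (Gy) at a phantom's measurement locations
print(has_tps_modeling_error(measured=[6.60, 6.55, 1.20],
                             institution_tps=[6.95, 6.90, 1.26],
                             recalc_tvs=[6.62, 6.57, 1.21]))   # -> True
```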
Sexual assault consultations - from high risk to high reliability.
Cunningham, Nicola
2012-02-01
The sexual assault consultation is a high-risk procedure with the potential for errors resulting in harm to both patients and staff. As such, it can be likened to practices in high-risk industries such as aviation and surgery. In contrast to these domains, however, the focus on performance safety and Threat and Error Management has not been widely adopted. This is despite a growing recognition of the vulnerabilities of the investigative and prosecutorial stages of alleged sexual assaults. In the context of “high risk” sexual assault consultations, the notion of safety refers not only to the risk of patient morbidity and mortality, but also to physical, psychological and judicial outcomes that affect patients, staff, and the wider community. This article identifies the latent threats present in sexual assault consultations and suggests a conceptual framework for application of Threat and Error Management in this specialised area of medicine. This will enable practitioners to be better equipped to recognise the risks and improve the performance and safety of sexual assault consultation processes. In an era of growing medicolegal concerns regarding issues such as environmental safety and the potential for contamination of cases, focussing on education and safety culture components within the investigative systems will allow sexual assault consultation processes to progress towards a new level of organisational reliability.
Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne
2015-05-01
To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to the outer needle cap design to mitigate needle-stick potential. In the first validation study (49 users, 343 simulated injections), in the FN group, one observed critical use error resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.
NASA Astrophysics Data System (ADS)
Swan, B.; Laverdiere, M.; Yang, L.
2017-12-01
In the past five years, deep Convolutional Neural Networks (CNNs) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function, and in turn how they may be optimized, are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as their mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with commission errors at the image-tile level and grouped these tiles using affinity propagation. Highly representative members of each commission-error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of the training process and sample creation.
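The clustering step described above can be sketched with scikit-learn's affinity propagation; the per-tile spectral summary array, its dimensions, and the damping setting are illustrative assumptions, not details of the original workflow.

```python
# Group per-tile spectral summaries of commission-error areas and pick exemplar
# tiles, which would then guide where new training samples are created.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
tile_spectral_summaries = rng.normal(size=(200, 6))  # e.g. per-band mean/stddev for each tile

ap = AffinityPropagation(damping=0.9, max_iter=500, random_state=0).fit(tile_spectral_summaries)
exemplar_tiles = ap.cluster_centers_indices_   # indices of the most representative tiles
labels = ap.labels_                            # cluster membership for every tile

print(f"{len(exemplar_tiles)} clusters; first exemplar tile indices: {exemplar_tiles[:5]}")
```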
Preventing Unintended Disclosure of Personally Identifiable Data Following Anonymisation.
Smith, Chris
2017-01-01
Errors and anomalies during the capture and processing of health data have the potential to place personally identifiable values into attributes of a dataset that are expected to contain non-identifiable values. Anonymisation focuses on those attributes that have been judged to enable identification of individuals. Attributes that are judged to contain non-identifiable values are not considered, but may be included in datasets that are shared by organisations. Consequently, organisations are at risk of sharing datasets that unintendedly disclose personally identifiable values through these attributes. This would have ethical and legal implications for organisations and privacy implications for individuals whose personally identifiable values are disclosed. In this paper, we formulate the problem of unintended disclosure following anonymisation, describe the necessary steps to address this problem, and discuss some key challenges to applying these steps in practice.
Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement
Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian
2013-01-01
Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom, and different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super-soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
Accuracy study of a robotic system for MRI-guided prostate needle placement.
Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian
2013-09-01
Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom, and different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.
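The 2.13 mm figure in both versions of this abstract follows from the stated assumption that the before-insertion and due-to-insertion errors are orthogonal, i.e. add in quadrature; a minimal worked check:

```python
# overall^2 = before_insertion^2 + due_to_insertion^2, so the unmeasured
# needle-tissue interaction error can be backed out from the other two.
import math

overall_error = 2.5           # mm, average overall system error
before_insertion_error = 1.3  # mm, robotic-system error measured in the soft phantom

due_to_insertion_error = math.sqrt(overall_error**2 - before_insertion_error**2)
# ~2.1 mm; the paper reports 2.13 mm (the small difference comes from rounding of the inputs)
print(f"{due_to_insertion_error:.2f} mm")
```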
Medhanyie, Araya Abrha; Spigt, Mark; Yebyo, Henock; Little, Alex; Tadesse, Kidane; Dinant, Geert-Jan; Blanco, Roman
2017-05-01
Mobile phone-based applications are considered by many as potentially useful for addressing challenges and improving the quality of data collection in developing countries. Yet very little evidence is available supporting or refuting the potential and widely perceived benefits of using electronic forms on smartphones for routine patient data collection by health workers at primary health care facilities. A facility-based cross-sectional study using a structured paper checklist was conducted to assess the completeness and accuracy of 408 electronic records completed and submitted to a central database server using electronic forms on smartphones by 25 health workers. The 408 electronic records were selected randomly out of a total of 1772 maternal health records submitted by the health workers to the central database over a period of six months. Descriptive frequencies and percentages of data completeness and error rates were calculated. When compared to paper records, the use of electronic forms significantly improved data completeness by 209 (8%) entries. Of a total of 2622 entries checked for completeness, 2602 (99.2%) electronic record entries were complete, while 2393 (91.3%) paper record entries were complete. A small number of easily identifiable errors occurred in both electronic and paper forms, although the error rate in the electronic records was more than double that of the paper records (2.8% vs. 1.1%). More than half of the entry errors in the electronic records related to entering a text value. With minimal training and supervision, and no incentives, health care workers were able to use electronic forms for patient assessment and routine data collection appropriately and accurately, with a very small error rate. Minimising the number of questions requiring text responses in electronic forms would help to minimise data errors. Copyright © 2017 Elsevier B.V. All rights reserved.
Error-related negativities elicited by monetary loss and cues that predict loss.
Dunning, Jonathan P; Hajcak, Greg
2007-11-19
Event-related potential studies have reported error-related negativity following both error commission and feedback indicating errors or monetary loss. The present study examined whether error-related negativities could be elicited by a predictive cue presented prior to both the decision and subsequent feedback in a gambling task. Participants were presented with a cue that indicated the probability of reward on the upcoming trial (0, 50, and 100%). Results showed a negative deflection in the event-related potential in response to loss cues compared with win cues; this waveform shared a similar latency and morphology with the traditional feedback error-related negativity.
Deep space target location with Hubble Space Telescope (HST) and Hipparcos data
NASA Technical Reports Server (NTRS)
Null, George W.
1988-01-01
Interplanetary spacecraft navigation requires accurate a priori knowledge of target positions. A concept is presented for attaining improved target ephemeris accuracy using two future Earth-orbiting optical observatories, the European Space Agency (ESA) Hipparcos observatory and the NASA Hubble Space Telescope (HST). Assuming nominal observatory performance, the Hipparcos data reduction will provide an accurate global star catalog, and HST will provide a capability for accurate angular measurements of stars and solar system bodies. The target location concept employs HST to observe solar system bodies relative to Hipparcos catalog stars and to determine the orientation (frame tie) of these stars to compact extragalactic radio sources. The target location process is described, the major error sources are discussed, the potential target ephemeris error is predicted, and mission applications are identified. Preliminary results indicate that ephemeris accuracy comparable to the errors in individual Hipparcos catalog stars may be possible with a more extensive HST observing program. Possible future ground- and space-based replacements for Hipparcos and HST astrometric capabilities are also discussed.
Folks, Russell D; Garcia, Ernest V; Taylor, Andrew T
2007-03-01
Quantitative nuclear renography has numerous potential sources of error. We previously reported the initial development of a computer software module for comprehensively addressing the issue of quality control (QC) in the analysis of radionuclide renal images. The objective of this study was to prospectively test the QC software. The QC software works in conjunction with standard quantitative renal image analysis using a renal quantification program. The software saves a text file that summarizes QC findings as possible errors in user-entered values, calculated values that may be unreliable because of the patient's clinical condition, and problems relating to acquisition or processing. To test the QC software, a technologist not involved in software development processed 83 consecutive nontransplant clinical studies. The QC findings of the software were then tabulated. QC events were defined as technical (study descriptors that were out of range or were entered and then changed, unusually sized or positioned regions of interest, or missing frames in the dynamic image set) or clinical (calculated functional values judged to be erroneous or unreliable). Technical QC events were identified in 36 (43%) of 83 studies. Clinical QC events were identified in 37 (45%) of 83 studies. Specific QC events included starting the camera after the bolus had reached the kidney, dose infiltration, oversubtraction of background activity, and missing frames in the dynamic image set. QC software has been developed to automatically verify user input, monitor calculation of renal functional parameters, summarize QC findings, and flag potentially unreliable values for the nuclear medicine physician. Incorporation of automated QC features into commercial or local renal software can reduce errors and improve technologist performance and should improve the efficiency and accuracy of image interpretation.
Bredfeldt, Christine E; Butani, Amy; Padmanabhan, Sandhyasree; Hitz, Paul; Pardee, Roy
2013-03-22
Multi-site health sciences research is becoming more common, as it enables investigation of rare outcomes and diseases and new healthcare innovations. Multi-site research usually involves the transfer of large amounts of research data between collaborators, which increases the potential for accidental disclosures of protected health information (PHI). Standard protocols for preventing release of PHI are extremely vulnerable to human error, particularly when the shared data sets are large. To address this problem, we developed an automated program (SAS macro) to identify possible PHI in research data before it is transferred between research sites. The macro reviews all data in a designated directory to identify suspicious variable names and data patterns. The macro looks for variables that may contain personal identifiers such as medical record numbers and social security numbers. In addition, the macro identifies dates and numbers that may identify people who belong to small groups, who may be identifiable even in the absence of traditional identifiers. Evaluation of the macro on 100 sample research data sets indicated a recall of 0.98 and a precision of 0.81. When implemented consistently, the macro has the potential to streamline the PHI review process and significantly reduce accidental PHI disclosures.
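A rough Python analogue of the macro's checks, offered as a hedged sketch only: the suspicious-name list and the SSN-like regex are invented placeholders, not the SAS macro's actual rules.

```python
# Scan column names and values of a research dataset for identifier-like content.
import re
import pandas as pd

SUSPICIOUS_NAMES = re.compile(r"(ssn|social|mrn|med(ical)?_?rec|name|dob|birth)", re.IGNORECASE)
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # placeholder pattern for SSN-like values

def review_dataframe(df: pd.DataFrame):
    findings = []
    for col in df.columns:
        if SUSPICIOUS_NAMES.search(col):
            findings.append((col, "suspicious variable name"))
        values = df[col].astype(str)
        if values.str.contains(SSN_PATTERN).any():
            findings.append((col, "SSN-like value pattern"))
    return findings

df = pd.DataFrame({"study_id": [101, 102], "ssn": ["123-45-6789", "987-65-4321"]})
print(review_dataframe(df))  # flags the 'ssn' column on both checks
```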
Classification and reduction of pilot error
NASA Technical Reports Server (NTRS)
Rogers, W. H.; Logan, A. L.; Boley, G. D.
1989-01-01
Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, identifying the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell, or structure, that could easily accommodate the addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationships among a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
Clarification of terminology in medication errors: definitions and classification.
Ferner, Robin E; Aronson, Jeffrey K
2006-01-01
We have previously described and analysed some terms that are used in drug safety and have proposed definitions. Here we discuss and define terms that are used in the field of medication errors, particularly terms that are sometimes misunderstood or misused. We also discuss the classification of medication errors. A medication error is a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient. Errors can be classified according to whether they are mistakes, slips, or lapses. Mistakes are errors in the planning of an action. They can be knowledge based or rule based. Slips and lapses are errors in carrying out an action - a slip through an erroneous performance and a lapse through an erroneous memory. Classification of medication errors is important because the probabilities of errors of different classes are different, as are the potential remedies.
Sensitivity and specificity of dosing alerts for dosing errors among hospitalized pediatric patients
Stultz, Jeremy S; Porter, Kyle; Nahata, Milap C
2014-01-01
Objectives To determine the sensitivity and specificity of a dosing alert system for dosing errors and to compare the sensitivity of a proprietary system with and without institutional customization at a pediatric hospital. Methods A retrospective analysis of medication orders, orders causing dosing alerts, reported adverse drug events, and dosing errors during July 2011 was conducted. Dosing errors with and without alerts were identified, and the sensitivity of the system with and without customization was compared. Results There were 47,181 inpatient pediatric orders during the study period; 257 dosing errors were identified (0.54%). The sensitivity of the system for identifying dosing errors was 54.1% (95% CI 47.8% to 60.3%) if customization had not occurred and increased to 60.3% (CI 54.0% to 66.3%) with customization (p=0.02). The sensitivity of the system for underdoses was 49.6% without customization and 60.3% with customization (p=0.01). Specificity of the customized system for dosing errors was 96.2% (CI 96.0% to 96.3%) with a positive predictive value of 8.0% (CI 6.8% to 9.3%). All dosing errors that triggered an alert had the alert over-ridden by the prescriber, and 40.6% of dosing errors with alerts were administered to the patient. The lack of indication-specific dose ranges was the most common reason why an alert did not occur for a dosing error. Discussion Advances in dosing alert systems should aim to improve the sensitivity and positive predictive value of the system for dosing errors. Conclusions The dosing alert system had a low sensitivity and positive predictive value for dosing errors, but might have prevented dosing errors from reaching patients. Customization increased the sensitivity of the system for dosing errors. PMID:24496386
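The reported metrics follow the standard confusion-matrix definitions; the sketch below uses counts approximately back-calculated from the published rates purely for illustration, not the study's raw data.

```python
# Relate alert/error counts to sensitivity, specificity, and positive predictive value.
def alert_performance(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # dosing errors that triggered an alert
    specificity = tn / (tn + fp)   # error-free orders that did not trigger an alert
    ppv = tp / (tp + fp)           # alerts that corresponded to true dosing errors
    return sensitivity, specificity, ppv

# Illustrative counts consistent with 257 dosing errors and the reported customized-system rates.
sens, spec, ppv = alert_performance(tp=155, fp=1780, fn=102, tn=45144)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}")  # ~60.3%, ~96.2%, ~8.0%
```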
Interventions to reduce medication errors in neonatal care: a systematic review
Nguyen, Minh-Nha Rhylie; Mosel, Cassandra
2017-01-01
Background: Medication errors represent a significant but often preventable cause of morbidity and mortality in neonates. The objective of this systematic review was to determine the effectiveness of interventions to reduce neonatal medication errors. Methods: A systematic review was undertaken of all comparative and noncomparative studies published in any language, identified from searches of PubMed and EMBASE and reference-list checking. Eligible studies were those investigating the impact of any medication safety interventions aimed at reducing medication errors in neonates in the hospital setting. Results: A total of 102 studies were identified that met the inclusion criteria, including 86 comparative and 16 noncomparative studies. Medication safety interventions were classified into six themes: technology (n = 38; e.g. electronic prescribing), organizational (n = 16; e.g. guidelines, policies, and procedures), personnel (n = 13; e.g. staff education), pharmacy (n = 9; e.g. clinical pharmacy service), hazard and risk analysis (n = 8; e.g. error detection tools), and multifactorial (n = 18; e.g. any combination of previous interventions). Significant variability was evident across all included studies, with differences in intervention strategies, trial methods, types of medication errors evaluated, and how medication errors were identified and evaluated. Most studies demonstrated an appreciable risk of bias. The vast majority of studies (>90%) demonstrated a reduction in medication errors. A similar median reduction of 50–70% in medication errors was evident across studies included within each of the identified themes, but findings varied considerably from a 16% increase in medication errors to a 100% reduction in medication errors. Conclusion: While neonatal medication errors can be reduced through multiple interventions aimed at improving the medication use process, no single intervention appeared clearly superior. Further research is required to evaluate the relative cost-effectiveness of the various medication safety interventions to facilitate decisions regarding uptake and implementation into clinical practice. PMID:29387337
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1985-01-01
The application of the Generalized Likelihood Ratio technique to the detection and identification of aircraft control element failures has been evaluated in a linear digital simulation of the longitudinal dynamics of a B-737 aircraft. Simulation results show that the technique has potential but that the effects of wind turbulence and Kalman filter model errors are problems which must be overcome.
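A minimal sketch of a GLR test of this general kind, applied to Kalman-filter innovations under the textbook assumption of a step bias in zero-mean Gaussian residuals with known variance; the window length, threshold, and simulated data are illustrative and do not reproduce the B-737 simulation.

```python
# Sliding-window GLR statistic for a hypothesized bias jump in Gaussian residuals.
import numpy as np

def glr_bias_test(residuals, sigma, window, threshold):
    """Return sample indices at which the GLR statistic exceeds the threshold."""
    r = np.asarray(residuals, dtype=float)
    alarms = []
    for k in range(window, len(r) + 1):
        seg = r[k - window:k]
        stat = seg.sum() ** 2 / (window * sigma ** 2)  # twice the maximized log-likelihood ratio
        if stat > threshold:
            alarms.append(k - 1)
    return alarms

rng = np.random.default_rng(1)
innovations = rng.normal(0.0, 0.1, 200)
innovations[120:] += 0.3  # simulated control-element bias appearing after sample 120
print(glr_bias_test(innovations, sigma=0.1, window=20, threshold=25.0)[:3])
```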
Multi-Unit Considerations for Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
St. Germain, S.; Boring, R.; Banaseanu, G.
This paper uses the insights from the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) methodology to help identify human actions currently modeled in the single unit PSA that may need to be modified to account for additional challenges imposed by a multi-unit accident as well as identify possible new human actions that might be modeled to more accurately characterize multi-unit risk. In identifying these potential human action impacts, the use of the SPAR-H strategy to include both errors in diagnosis and errors in action is considered as well as identifying characteristics of a multi-unit accident scenario that may impact the selection of the performance shaping factors (PSFs) used in SPAR-H. The lessons learned from the Fukushima Daiichi reactor accident will be addressed to further help identify areas where improved modeling may be required. While these multi-unit impacts may require modifications to a Level 1 PSA model, it is expected to have much more importance for Level 2 modeling. There is little currently written specifically about multi-unit HRA issues. A review of related published research will be presented. While this paper cannot answer all issues related to multi-unit HRA, it will hopefully serve as a starting point to generate discussion and spark additional ideas towards the proper treatment of HRA in a multi-unit PSA.
Development and content validation of performance assessments for endoscopic third ventriculostomy.
Breimer, Gerben E; Haji, Faizal A; Hoving, Eelco W; Drake, James M
2015-08-01
This study aims to develop and establish the content validity of multiple expert rating instruments to assess performance in endoscopic third ventriculostomy (ETV), collectively called the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). The important aspects of ETV were identified through a review of current literature, ETV videos, and discussion with neurosurgeons, fellows, and residents. Three assessment measures were subsequently developed: a procedure-specific checklist (CL), a CL of surgical errors, and a global rating scale (GRS). Neurosurgeons from various countries, all identified as experts in ETV, were then invited to participate in a modified Delphi survey to establish the content validity of these instruments. In each Delphi round, experts rated their agreement including each procedural step, error, and GRS item in the respective instruments on a 5-point Likert scale. Seventeen experts agreed to participate in the study and completed all Delphi rounds. After item generation, a total of 27 procedural CL items, 26 error CL items, and 9 GRS items were posed to Delphi panelists for rating. An additional 17 procedural CL items, 12 error CL items, and 1 GRS item were added by panelists. After three rounds, strong consensus (>80% agreement) was achieved on 35 procedural CL items, 29 error CL items, and 10 GRS items. Moderate consensus (50-80% agreement) was achieved on an additional 7 procedural CL items and 1 error CL item. The final procedural and error checklist contained 42 and 30 items, respectively (divided into setup, exposure, navigation, ventriculostomy, and closure). The final GRS contained 10 items. We have established the content validity of three ETV assessment measures by iterative consensus of an international expert panel. Each measure provides unique assessment information and thus can be used individually or in combination, depending on the characteristics of the learner and the purpose of the assessment. These instruments must now be evaluated in both the simulated and operative settings, to determine their construct validity and reliability. Ultimately, the measures contained in the NEVAT may prove suitable for formative assessment during ETV training and potentially as summative assessment measures during certification.
Masked and unmasked error-related potentials during continuous control and feedback
NASA Astrophysics Data System (ADS)
Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.
2018-06-01
The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average Cohen's κ, average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average Cohen's κ, average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the feedback modality did not hinder the asynchronous detection of ErrPs.
Human error in airway facilities.
DOT National Transportation Integrated Search
2001-01-01
This report examines human errors in Airway Facilities (AF) with the intent of preventing these errors from being : passed on to the new Operations Control Centers. To effectively manage errors, they first have to be identified. : Human factors engin...
Algorithmic Classification of Five Characteristic Types of Paraphasias.
Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven
2016-12-01
This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
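Two of the criteria described above can be illustrated compactly: a lexicality judgment from a word-frequency table (standing in for the SUBTLEXus norms) and a cosine-similarity semantic criterion over word vectors (standing in for word2vec). The toy frequency values, vectors, and the example target/production pair are assumptions for demonstration only.

```python
import numpy as np

word_frequency = {"cat": 120.5, "dog": 95.2, "sofa": 14.3}       # per-million frequency norms (toy)
embeddings = {"cat": np.array([0.9, 0.1, 0.2]),
              "dog": np.array([0.8, 0.2, 0.25]),
              "sofa": np.array([0.1, 0.9, 0.4])}

def is_real_word(production):
    return word_frequency.get(production, 0.0) > 0.0   # treat absence from the norms as a nonword

def semantic_similarity(target, production):
    a, b = embeddings[target], embeddings[production]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A patient says "dog" for target "cat": a real word with high semantic similarity,
# which would be scored as a semantic paraphasia rather than a neologism.
print(is_real_word("dog"), round(semantic_similarity("cat", "dog"), 2))
```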
NASA Astrophysics Data System (ADS)
Breitkopf, Sven; Lilienfein, Nikolai; Achtnich, Timon; Zwyssig, Christof; Tünnermann, Andreas; Pupeza, Ioachim; Limpert, Jens
2018-06-01
Compact, ultra-high-speed self-bearing permanent-magnet motors enable a wide scope of applications, including an increasing number of optical ones. For implementation in an optical setup, the rotors have to satisfy high demands regarding their velocity and pointing errors. Only a restricted number of measurements of these parameters exist, and only at relatively low velocities. This manuscript presents measurements of the velocity and pointing errors at rotation frequencies up to 5 kHz. The acquired data allow us to identify the rotor drive as the main source of velocity variations, with fast fluctuations of up to 3.4 ns (RMS) and slow drifts of 23 ns (RMS) over ˜120 revolutions at 5 kHz in vacuum. At the same rotation frequency, the pointing fluctuated by 12 μrad (RMS) and 33 μrad (peak-to-peak) over ˜10 000 round trips. To the best of our knowledge, this is the first measurement of velocity and pointing errors at multi-kHz rotation frequencies, and it will allow potential adopters to evaluate the feasibility of such rotor drives for their application.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, F; Cao, N; Young, L
2014-06-15
Purpose: Though FMEA (Failure Mode and Effects Analysis) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge it has never been validated against actual incident learning data. The objective of this study was to perform an FMEA of an SBRT (Stereotactic Body Radiation Therapy) treatment planning process and validate this against data recorded within an incident learning system. Methods: FMEA on the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, and dosimetrists. Potential failure modes were identified through a systematic review of the workflow process. Failure modes were rated for severity, occurrence, and detectability on a scale of 1 to 10, and the RPN (Risk Priority Number) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two years. Differences were identified. Results: FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. FMEA failed to anticipate 13 of these events, among which 3 were registered with severity ratings of severe or critical in the incident learning system. Combining both methods yielded a total of 76 failure modes, and when scored for RPN the 13 events missed by FMEA ranked within the middle half of all failure modes. Conclusion: FMEA, though valuable, is subject to certain limitations, among them the limited ability to anticipate all potential errors for a given process. This FMEA exercise failed to identify a significant number of possible errors (17%). Integration of FMEA with retrospective incident data may be able to render an improved overview of risks within a process.
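The RPN scoring step used in this and most FMEA studies is simply the product of the three ratings; a minimal sketch with invented failure modes and ratings (not the study's actual scores):

```python
# Each failure mode is rated 1-10 for severity (S), occurrence (O), and
# detectability (D); the Risk Priority Number is RPN = S * O * D.
failure_modes = [
    {"name": "wrong CT dataset selected",        "S": 8, "O": 3, "D": 7},
    {"name": "incorrect density override",       "S": 7, "O": 4, "D": 6},
    {"name": "prescription transcription error", "S": 9, "O": 2, "D": 4},
]

for fm in failure_modes:
    fm["RPN"] = fm["S"] * fm["O"] * fm["D"]

for fm in sorted(failure_modes, key=lambda fm: fm["RPN"], reverse=True):
    print(f'{fm["name"]}: RPN={fm["RPN"]}')  # highest-RPN modes are prioritized for mitigation
```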
NASA Astrophysics Data System (ADS)
Zhang, Fan; Liu, Pinkuan
2018-04-01
In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and error sources are initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The most effective factor that affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
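A minimal sketch of B-spline-based error compensation of the kind compared above: measured positioning errors at sampled stage positions are fitted with a cubic B-spline and subtracted from commanded positions. The error values and stage range are synthetic placeholders.

```python
import numpy as np
from scipy.interpolate import splrep, splev

positions = np.linspace(0.0, 300.0, 16)                                        # mm along the stage axis
measured_error = 2e-3 * np.sin(positions / 40.0) + 5e-4 * positions / 300.0    # mm, synthetic geometric error

spline = splrep(positions, measured_error, k=3)   # cubic B-spline model of the error map

def compensated_command(target_mm):
    predicted_error = splev(target_mm, spline)    # interpolated error at the commanded position
    return target_mm - predicted_error            # pre-correct the motion command

print(compensated_command(123.4))
```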
How scientific experiments are designed: Problem solving in a knowledge-rich, error-rich environment
NASA Astrophysics Data System (ADS)
Baker, Lisa M.
While theory formation and the relation between theory and data has been investigated in many studies of scientific reasoning, researchers have focused less attention on reasoning about experimental design, even though the experimental design process makes up a large part of real-world scientists' reasoning. The goal of this thesis was to provide a cognitive account of the scientific experimental design process by analyzing experimental design as problem-solving behavior (Newell & Simon, 1972). Three specific issues were addressed: the effect of potential error on experimental design strategies, the role of prior knowledge in experimental design, and the effect of characteristics of the space of alternate hypotheses on alternate hypothesis testing. A two-pronged in vivo/in vitro research methodology was employed, in which transcripts of real-world scientific laboratory meetings were analyzed as well as undergraduate science and non-science majors' design of biology experiments in the psychology laboratory. It was found that scientists use a specific strategy to deal with the possibility of error in experimental findings: they include "known" control conditions in their experimental designs both to determine whether error is occurring and to identify sources of error. The known controls strategy had not been reported in earlier studies with science-like tasks, in which participants' responses to error had consisted of replicating experiments and discounting results. With respect to prior knowledge: scientists and undergraduate students drew on several types of knowledge when designing experiments, including theoretical knowledge, domain-specific knowledge of experimental techniques, and domain-general knowledge of experimental design strategies. Finally, undergraduate science students generated and tested alternates to their favored hypotheses when the space of alternate hypotheses was constrained and searchable. This result may help explain findings of confirmation bias in earlier studies using science-like tasks, in which characteristics of the alternate hypothesis space may have made it unfeasible for participants to generate and test alternate hypotheses. In general, scientists and science undergraduates were found to engage in a systematic experimental design process that responded to salient features of the problem environment, including the constant potential for experimental error, availability of alternate hypotheses, and access to both theoretical knowledge and knowledge of experimental techniques.
Bindoff, I; Stafford, A; Peterson, G; Kang, B H; Tenni, P
2012-08-01
Drug-related problems (DRPs) are of serious concern worldwide, particularly for the elderly who often take many medications simultaneously. Medication reviews have been demonstrated to improve medication usage, leading to reductions in DRPs and potential savings in healthcare costs. However, medication reviews are not always of a consistently high standard, and there is often room for improvement in the quality of their findings. Our aim was to produce computerized intelligent decision support software that can improve the consistency and quality of medication review reports, by helping to ensure that DRPs relevant to a patient are overlooked less frequently. A system that largely achieved this goal was previously published, but refinements have been made. This paper examines the results of both the earlier and newer systems. Two prototype multiple-classification ripple-down rules medication review systems were built, the second being a refinement of the first. Each of the systems was trained incrementally using a human medication review expert. The resultant knowledge bases were analysed and compared, showing factors such as accuracy, time taken to train, and potential errors avoided. The two systems performed well, achieving accuracies of approximately 80% and 90%, after being trained on only a small number of cases (126 and 244 cases, respectively). Through analysis of the available data, it was estimated that without the system intervening, the expert training the first prototype would have missed approximately 36% of potentially relevant DRPs, and the second 43%. However, the system appeared to prevent the majority of these potential expert errors by correctly identifying the DRPs for them, leaving only an estimated 8% error rate for the first expert and 4% for the second. These intelligent decision support systems have shown a clear potential to substantially improve the quality and consistency of medication reviews, which should in turn translate into improved medication usage if they were implemented into routine use. © 2011 Blackwell Publishing Ltd.
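The knowledge-base structure behind such a system can be illustrated with a highly simplified (single-classification) ripple-down-rules sketch; the real system is multiple-classification, and the rules, drug names, and DRP text below are invented for illustration.

```python
# Each rule has a condition, a conclusion (a candidate DRP), and exception rules
# added as corrections whenever the expert overrides the system on a new case.
NOT_FIRED = object()

class Rule:
    def __init__(self, condition, conclusion):
        self.condition = condition        # function(case) -> bool
        self.conclusion = conclusion      # DRP text, or None for "no problem"
        self.exceptions = []              # refinement rules added during expert training

    def fire(self, case):
        if not self.condition(case):
            return NOT_FIRED
        for exc in self.exceptions:
            refined = exc.fire(case)
            if refined is not NOT_FIRED:
                return refined            # a more specific exception overrides this rule
        return self.conclusion

root = Rule(lambda c: True, None)
nsaid_rule = Rule(lambda c: "ibuprofen" in c["drugs"], "NSAID without gastroprotection")
nsaid_rule.exceptions.append(Rule(lambda c: "pantoprazole" in c["drugs"], None))  # PPI already co-prescribed
root.exceptions.append(nsaid_rule)

print(root.fire({"drugs": ["ibuprofen"]}))                  # -> NSAID without gastroprotection
print(root.fire({"drugs": ["ibuprofen", "pantoprazole"]}))  # -> None (exception suppresses the DRP)
```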
Effects of structural error on the estimates of parameters of dynamical systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1986-01-01
In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.
Kostopoulou, Olga; Delaney, Brendan
2007-04-01
To classify events of actual or potential harm to primary care patients using a multilevel taxonomy of cognitive and system factors. Observational study of patient safety events obtained via a confidential but not anonymous reporting system. Reports were followed up with interviews where necessary. Events were analysed for their causes and contributing factors using causal trees and were classified using the taxonomy. Five general medical practices in the West Midlands were selected to represent a range of sizes and types of patient population. All practice staff were invited to report patient safety events. Main outcome measures were frequencies of clinical types of events reported, cognitive types of error, types of detection and contributing factors; and relationship between types of error, practice size, patient consequences and detection. 78 reports were relevant to patient safety and analysable. They included 21 (27%) adverse events and 50 (64%) near misses. 16.7% (13/71) had serious patient consequences, including one death. 75.7% (59/78) had the potential for serious patient harm. Most reports referred to administrative errors (25.6%, 20/78). 60% (47/78) of the reports contained sufficient information to characterise cognition: "situation assessment and response selection" was involved in 45% (21/47) of these reports and was often linked to serious potential consequences. The most frequent contributing factor was work organisation, identified in 71 events. This included excessive task demands (47%, 37/71) and fragmentation (28%, 22/71). Even though most reported events were near misses, events with serious patient consequences were also reported. Failures in situation assessment and response selection, a cognitive activity that occurs in both clinical and administrative tasks, was related to serious potential harm.
Kostopoulou, Olga; Delaney, Brendan
2007-01-01
Objective To classify events of actual or potential harm to primary care patients using a multilevel taxonomy of cognitive and system factors. Methods Observational study of patient safety events obtained via a confidential but not anonymous reporting system. Reports were followed up with interviews where necessary. Events were analysed for their causes and contributing factors using causal trees and were classified using the taxonomy. Five general medical practices in the West Midlands were selected to represent a range of sizes and types of patient population. All practice staff were invited to report patient safety events. Main outcome measures were frequencies of clinical types of events reported, cognitive types of error, types of detection and contributing factors; and relationship between types of error, practice size, patient consequences and detection. Results 78 reports were relevant to patient safety and analysable. They included 21 (27%) adverse events and 50 (64%) near misses. 16.7% (13/71) had serious patient consequences, including one death. 75.7% (59/78) had the potential for serious patient harm. Most reports referred to administrative errors (25.6%, 20/78). 60% (47/78) of the reports contained sufficient information to characterise cognition: “situation assessment and response selection” was involved in 45% (21/47) of these reports and was often linked to serious potential consequences. The most frequent contributing factor was work organisation, identified in 71 events. This included excessive task demands (47%, 37/71) and fragmentation (28%, 22/71). Conclusions Even though most reported events were near misses, events with serious patient consequences were also reported. Failures in situation assessment and response selection, a cognitive activity that occurs in both clinical and administrative tasks, was related to serious potential harm. PMID:17403753
Overview of medical errors and adverse events
2012-01-01
Safety is a global concept that encompasses efficiency, security of care, reactivity of caregivers, and satisfaction of patients and relatives. Patient safety has emerged as a major target for healthcare improvement. Quality assurance is a complex task, and patients in the intensive care unit (ICU) are more likely than other hospitalized patients to experience medical errors, due to the complexity of their conditions, need for urgent interventions, and considerable workload fluctuation. Medication errors are the most common medical errors and can induce adverse events. Two approaches are available for evaluating and improving quality-of-care: the room-for-improvement model, in which problems are identified, plans are made to resolve them, and the results of the plans are measured; and the monitoring model, in which quality indicators are defined as relevant to potential problems and then monitored periodically. Indicators that reflect structures, processes, or outcomes have been developed by medical societies. Surveillance of these indicators is organized at the hospital or national level. Using a combination of methods improves the results. Errors are caused by combinations of human factors and system factors, and information must be obtained on how people make errors in the ICU environment. Preventive strategies are more likely to be effective if they rely on a system-based approach, in which organizational flaws are remedied, rather than a human-based approach of encouraging people not to make errors. The development of a safety culture in the ICU is crucial to effective prevention and should occur before the evaluation of safety programs, which are more likely to be effective when they involve bundles of measures. PMID:22339769
Effects of errors and gaps in spatial data sets on assessment of conservation progress.
Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C
2013-10-01
Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.
NASA Astrophysics Data System (ADS)
Gao, Jing; Burt, James E.
2017-12-01
This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method that originated in machine learning and has not been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results showed that BVD can reveal sources of estimation errors, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging) and inform efficient training sample allocation: training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
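The per-pixel decomposition itself is straightforward once an ensemble of estimates is available for each pixel (e.g. from bootstrap-resampled training sets); the sketch below uses synthetic arrays and checks the identity MSE = bias^2 + variance that underlies the squared-error analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.uniform(0, 100, size=(50, 50))                  # observed imperviousness (%), synthetic
ensemble = reference + rng.normal(3.0, 8.0, size=(25, 50, 50))  # 25 model runs per pixel, synthetic

mean_estimate = ensemble.mean(axis=0)
bias_map = mean_estimate - reference                  # systematic over/under-estimation per pixel
variance_map = ensemble.var(axis=0)                   # spread of the estimates per pixel
mse_map = ((ensemble - reference) ** 2).mean(axis=0)  # mean squared error per pixel

# For squared error, MSE = bias^2 + variance holds exactly per pixel:
print(np.allclose(mse_map, bias_map**2 + variance_map))
```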
Perceived barriers to medical-error reporting: an exploratory investigation.
Uribe, Claudia L; Schweikhart, Sharon B; Pathak, Dev S; Dow, Merrell; Marsh, Gail B
2002-01-01
Medical-error reporting is an essential component for patient safety enhancement. Unfortunately, medical errors are largely underreported across healthcare institutions. This problem can be attributed to different factors and barriers present at organizational and individual levels that ultimately prevent individuals from generating the report. This study explored the factors that affect medical-error reporting among physicians and nurses at a large academic medical center located in the midwest United States. A nominal group session was conducted to identify the most relevant factors that act as barriers for error reporting. These factors were then used to design a questionnaire that explored the likelihood of the factors to act as barriers and their likelihood to be modified. Using these two parameters, the results were analyzed and combined into a Factor Relevance Matrix. The matrix identifies the factors for which immediate actions should be undertaken to improve medical-error reporting (immediate action factors). It also identifies factors that require long-term strategies (long-term strategy factors) as well as factors that the organization should be aware of but that are of lower priority (awareness factors). The strategies outlined in this study may assist healthcare organizations in improving medical-error reporting, as part of the efforts toward patient-safety enhancement. Although factors affecting medical-error reporting may vary between different organizations, the process used in identifying the factors and the Factor Relevance Matrix developed in this study are easily adaptable to any organizational setting.
Boquet, Albert J; Cohen, Tara N; Cabrera, Jennifer S; Litzinger, Tracy L; Captain, Kevin A; Fabian, Michael A; Miles, Steven G; Shappell, Scott A
2016-09-09
Historically, health care has relied on error management techniques to measure and reduce the occurrence of adverse events. This study proposes an alternative approach for identifying and analyzing hazardous events. Whereas previous research has concentrated on investigating individual flow disruptions, we maintain the industry should focus on threat windows, or the accumulation of these disruptions. This methodology, driven by the broken windows theory, allows us to identify process inefficiencies before they manifest and open the door for the occurrence of errors and adverse events. Medical human factors researchers observed disruptions during 34 trauma cases at a Level II trauma center. Data were collected during resuscitation and imaging and were classified using a human factors taxonomy: Realizing Improved Patient Care Through Human-Centered Operating Room Design for Threat Window Analysis (RIPCHORD-TWA). Of the 576 total disruptions observed, communication issues were the most prevalent (28%), followed by interruptions and coordination issues (24% each). Issues related to layout (16%), usability (5%), and equipment (2%) comprised the remainder of the observations. Disruptions involving communication issues were more prevalent during resuscitation, whereas coordination problems were observed more frequently during imaging. Rather than solely investigating errors and adverse events, we propose conceptualizing the accumulation of disruptions in terms of threat windows as a means to analyze potential threats to the integrity of the trauma care system. This approach allows for the improved identification of system weaknesses or threats, affording us the ability to address these inefficiencies and intervene before errors and adverse events may occur.
Ferrarese, Alessia; Gentile, Valentina; Bindi, Marco; Rivelli, Matteo; Cumbo, Jacopo; Solej, Mario; Enrico, Stefano; Martino, Valter
2016-01-01
A well-designed learning curve is essential for the acquisition of laparoscopic skills; but are there risk factors that can derail the surgical method? From a review of the current literature on the learning curve in laparoscopic surgery, we identified learning curve components in video laparoscopic cholecystectomy; we suggest a learning curve model that can be applied to assess the progress of general surgical residents as they learn and master the stages of video laparoscopic cholecystectomy, regardless of type of patient. Electronic databases were interrogated to better define the terms "surgeon", "specialized surgeon", and "specialist surgeon"; we surveyed the literature on surgical residency programs outside Italy to identify learning curve components, influential factors, the importance of tutoring, and the role of reference centers in residency education in surgery. From the definition of acceptable error, self-efficacy, and error classification, we devised a learning curve model that may be applied to training surgical residents in video laparoscopic cholecystectomy. Based on the criteria culled from the literature, the three surgeon categories (general, specialized, and specialist) are distinguished by years of experience, case volume, and error rate; patients were distinguished by age and characteristics. The training model was constructed as a series of key learning steps in video laparoscopic cholecystectomy. Potential errors were identified, and the difficulty of each step was graded using operation-specific characteristics. On completion of each procedure, error checklist scores on procedure-specific performance are tallied to track the learning curve and obtain performance indices that chart the trainee's progress. The concept of the learning curve in general surgery is disputed. The use of learning steps may enable the resident surgical trainee to acquire video laparoscopic cholecystectomy skills proportional to the instructor's ability, the trainee's own skills, and the safety of the surgical environment. No patient characteristics were identified that could derail the method. With this training scheme, resident trainees may be given the opportunity to develop their intrinsic capabilities without the loss of basic technical skills.
Neale, Chris; Madill, Chris; Rauscher, Sarah; Pomès, Régis
2013-08-13
All molecular dynamics simulations are susceptible to sampling errors, which degrade the accuracy and precision of observed values. The statistical convergence of simulations containing atomistic lipid bilayers is limited by the slow relaxation of the lipid phase, which can exceed hundreds of nanoseconds. These long conformational autocorrelation times are exacerbated in the presence of charged solutes, which can induce significant distortions of the bilayer structure. Such long relaxation times represent hidden barriers that induce systematic sampling errors in simulations of solute insertion. To identify optimal methods for enhancing sampling efficiency, we quantitatively evaluate convergence rates using generalized ensemble sampling algorithms in calculations of the potential of mean force for the insertion of the ionic side chain analog of arginine in a lipid bilayer. Umbrella sampling (US) is used to restrain solute insertion depth along the bilayer normal, the order parameter commonly used in simulations of molecular solutes in lipid bilayers. When US simulations are modified to conduct random walks along the bilayer normal using a Hamiltonian exchange algorithm, systematic sampling errors are eliminated more rapidly and the rate of statistical convergence of the standard free energy of binding of the solute to the lipid bilayer is increased 3-fold. We compute the ratio of the replica flux transmitted across a defined region of the order parameter to the replica flux that entered that region in Hamiltonian exchange simulations. We show that this quantity, the transmission factor, identifies sampling barriers in degrees of freedom orthogonal to the order parameter. The transmission factor is used to estimate the depth-dependent conformational autocorrelation times of the simulation system, some of which exceed the simulation time, and thereby identify solute insertion depths that are prone to systematic sampling errors and estimate the lower bound of the amount of sampling that is required to resolve these sampling errors. Finally, we extend our simulations and verify that the conformational autocorrelation times estimated by the transmission factor accurately predict correlation times that exceed the simulation time scale, something that, to our knowledge, has never before been achieved.
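As a rough illustration of the transmission-factor idea described above, the sketch below counts, for a single one-dimensional trajectory along the bilayer normal, how many excursions into a chosen window exit on the side opposite to the one they entered from. The function name, window bounds, and random-walk input are all invented for illustration; the published analysis operates on replica fluxes in Hamiltonian exchange simulations, and its exact bookkeeping may differ.

```python
import numpy as np

def transmission_factor(z, z_lo, z_hi):
    """Fraction of excursions into the window [z_lo, z_hi] that exit on the
    side opposite to the one they entered from (transmitted / entered).
    A minimal single-trajectory sketch of the flux-ratio idea."""
    entered = transmitted = 0
    inside = z_lo <= z[0] <= z_hi
    entry_side = 0                      # unknown if the walk starts inside
    for prev, cur in zip(z[:-1], z[1:]):
        in_window = z_lo <= cur <= z_hi
        if not inside and in_window:
            inside = True
            entry_side = -1 if prev < z_lo else +1
            entered += 1
        elif inside and not in_window:
            inside = False
            exit_side = -1 if cur < z_lo else +1
            if entry_side != 0 and exit_side != entry_side:
                transmitted += 1
    return transmitted / entered if entered else float("nan")

# Toy random walk standing in for a replica's insertion-depth time series (nm)
rng = np.random.default_rng(0)
z_traj = np.cumsum(rng.normal(0.0, 0.05, 200_000))
print(f"transmission factor: {transmission_factor(z_traj, -0.5, 0.5):.2f}")
```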
Developmental Changes in Error Monitoring: An Event-Related Potential Study
ERIC Educational Resources Information Center
Wiersema, Jan R.; van der Meere, Jacob J.; Roeyers, Herbert
2007-01-01
The aim of the study was to investigate the developmental trajectory of error monitoring. For this purpose, children (age 7-8), young adolescents (age 13-14) and adults (age 23-24) performed a Go/No-Go task and were compared on overt reaction time (RT) performance and on event-related potentials (ERPs), thought to reflect error detection…
Understanding overlay signatures using machine learning on non-lithography context information
NASA Astrophysics Data System (ADS)
Overcast, Marshall; Mellegaard, Corey; Daniel, David; Habets, Boris; Erley, Georg; Guhlemann, Steffen; Thrun, Xaver; Buhl, Stefan; Tottewitz, Steven
2018-03-01
Overlay errors between two layers can be caused by non-lithography processes. While these errors can be compensated by the run-to-run system, such process and tool signatures are not always stable. In order to monitor the impact of non-lithography context on overlay at regular intervals, a systematic approach is needed. Using various machine learning techniques, significant context parameters that relate to deviating overlay signatures are automatically identified. Once the most influential context parameters are found, a run-to-run simulation is performed to see how much improvement can be obtained. The resulting analysis shows good potential for reducing the influence of hidden context parameters on overlay performance. Non-lithographic contexts are significant contributors, and their automatic detection and classification will enable the overlay roadmap, given the corresponding control capabilities.
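A minimal sketch of one form such an automatic screen could take, assuming a per-lot table of categorical non-lithography context (tool and chamber identifiers) and a summary overlay residual; the column names and values are invented, and the study's actual machine learning techniques and run-to-run simulation are not reproduced here.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-lot table: categorical context (e.g., etch chamber, CMP tool)
# plus a summary overlay residual (nm) after run-to-run correction.
df = pd.DataFrame({
    "etch_chamber":  ["A", "A", "B", "B", "C", "C", "A", "B"],
    "cmp_tool":      ["T1", "T2", "T1", "T2", "T1", "T2", "T2", "T1"],
    "anneal_recipe": ["R1", "R1", "R2", "R2", "R1", "R2", "R2", "R1"],
    "overlay_resid_nm": [1.2, 1.1, 2.4, 2.6, 1.3, 2.5, 1.0, 2.3],
})

# One-hot encode the context columns and rank them by importance for the residual
X = pd.get_dummies(df.drop(columns="overlay_resid_nm"))
y = df["overlay_resid_nm"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(X.columns, model.feature_importances_),
                 key=lambda kv: kv[1], reverse=True)
for name, importance in ranking[:5]:
    print(f"{name:25s} {importance:.3f}")
```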
Package Design Affects Accuracy Recognition for Medications.
Endestad, Tor; Wortinger, Laura A; Madsen, Steinar; Hortemo, Sigurd
2016-12-01
Our aim was to test whether highlighting and placement of the substance name on medication packages have the potential to reduce patient errors. An unintentional overdose of medication is a major health issue that might be linked to medication package design. In two experiments, placement, background color, and the active ingredient of generic medication packages were manipulated according to best human factors guidelines to reduce causes of labeling-related patient errors. We compared the original packaging with packages in which we varied placement of the name, dose, and background of the active ingredient. Age-related differences and the effect of color on medication recognition error were tested. In Experiment 1, 59 volunteers (30 elderly and 29 young students) participated. In Experiment 2, 25 volunteers participated. The most common error was the inability to identify that two different packages contained the same active ingredient (young, 41%, and elderly, 68%). This kind of error decreased with the redesigned packages (young, 8%, and elderly, 16%). Confusion errors related to color design were reduced by two thirds in the redesigned packages compared with the original generic medications. Prominent placement of the substance name and dose with a band of high-contrast color supports recognition of the active substance in medications. A simple modification, highlighting and placing the name of the active ingredient in the upper right-hand corner of the package, helps users realize that two different packages can contain the same active substance, thus reducing the risk of inadvertent medication overdose. © 2016, Human Factors and Ergonomics Society.
Underestimation of Low-Dose Radiation in Treatment Planning of Intensity-Modulated Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jang, Si Young; Liu, H. Helen; Mohan, Radhe
2008-08-01
Purpose: To investigate potential dose calculation errors in the low-dose regions and identify causes of such errors for intensity-modulated radiotherapy (IMRT). Methods and Materials: The IMRT treatment plans of 23 patients with lung cancer and mesothelioma were reviewed. Of these patients, 15 had severe pulmonary complications after radiotherapy. Two commercial treatment-planning systems (TPSs) and a Monte Carlo system were used to calculate and compare dose distributions and dose-volume parameters of the target volumes and critical structures. The effects of tissue heterogeneity, multileaf collimator (MLC) modeling, beam modeling, and other factors that could contribute to the differences in IMRT dose calculations were analyzed. Results: In the commercial TPS-generated IMRT plans, dose calculation errors primarily occurred in the low-dose regions of IMRT plans (<50% of the radiation dose prescribed for the tumor). Although errors in the dose-volume histograms of the normal lung were small (<5%) above 10 Gy, underestimation of dose <10 Gy was found to be up to 25% in patients with mesothelioma or large target volumes. These errors were found to be caused by inadequate modeling of MLC transmission and leaf scatter in commercial TPSs. The degree of low-dose errors depends on the target volumes and the degree of intensity modulation. Conclusions: Secondary radiation from MLCs contributes a significant portion of low dose in IMRT plans. Dose underestimation could occur in conventional IMRT dose calculations if such low-dose radiation is not properly accounted for.
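As a simple illustration of how such a comparison can be quantified, the sketch below computes the mean relative error of a TPS dose grid against a Monte Carlo reference, restricted to voxels below 50% of the prescription; the arrays and the 60 Gy prescription are toy values, not the study's data.

```python
import numpy as np

def low_dose_error(tps_dose, mc_dose, prescription_gy, threshold=0.5):
    """Mean relative error of the TPS dose in voxels receiving less than
    `threshold` * prescription according to the Monte Carlo reference.
    A minimal sketch assuming 3-D dose grids of identical shape."""
    low = mc_dose < threshold * prescription_gy
    rel_err = (tps_dose[low] - mc_dose[low]) / np.maximum(mc_dose[low], 1e-6)
    return rel_err.mean()

# Toy grids: a TPS that underestimates the low-dose bath by ~20%
mc = np.random.default_rng(1).uniform(0.0, 60.0, size=(40, 40, 40))
tps = np.where(mc < 30.0, 0.8 * mc, mc)
print(f"mean relative error below 50% of 60 Gy: {low_dose_error(tps, mc, 60.0):.2%}")
```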
Package Design Affects Accuracy Recognition for Medications
Endestad, Tor; Wortinger, Laura A.; Madsen, Steinar; Hortemo, Sigurd
2016-01-01
Objective: Our aim was to test whether highlighting and placement of the substance name on medication packages have the potential to reduce patient errors. Background: An unintentional overdose of medication is a major health issue that might be linked to medication package design. In two experiments, placement, background color, and the active ingredient of generic medication packages were manipulated according to best human factors guidelines to reduce causes of labeling-related patient errors. Method: In two experiments, we compared the original packaging with packages in which we varied placement of the name, dose, and background of the active ingredient. Age-related differences and the effect of color on medication recognition error were tested. In Experiment 1, 59 volunteers (30 elderly and 29 young students) participated. In Experiment 2, 25 volunteers participated. Results: The most common error was the inability to identify that two different packages contained the same active ingredient (young, 41%, and elderly, 68%). This kind of error decreased with the redesigned packages (young, 8%, and elderly, 16%). Confusion errors related to color design were reduced by two thirds in the redesigned packages compared with the original generic medications. Conclusion: Prominent placement of the substance name and dose with a band of high-contrast color supports recognition of the active substance in medications. Application: A simple modification, highlighting and placing the name of the active ingredient in the upper right-hand corner of the package, helps users realize that two different packages can contain the same active substance, thus reducing the risk of inadvertent medication overdose. PMID:27591209
Prescribers' expectations and barriers to electronic prescribing of controlled substances
Kim, Meelee; McDonald, Ann; Kreiner, Peter; Kelleher, Stephen J; Blackman, Michael B; Kaufman, Peter N; Carrow, Grant M
2011-01-01
Objective To better understand barriers associated with the adoption and use of electronic prescribing of controlled substances (EPCS), a practice recently established by US Drug Enforcement Administration regulation. Materials and methods Prescribers of controlled substances affiliated with a regional health system were surveyed regarding current electronic prescribing (e-prescribing) activities, current prescribing of controlled substances, and expectations and barriers to the adoption of EPCS. Results 246 prescribers (response rate of 64%) represented a range of medical specialties, with 43.1% of these prescribers current users of e-prescribing for non-controlled substances. Reported issues with controlled substances included errors, pharmacy call-backs, and diversion; most prescribers expected EPCS to address many of these problems, specifically reduce medical errors, improve work flow and efficiency of practice, help identify prescription diversion or misuse, and improve patient treatment management. Prescribers expected, however, that it would be disruptive to practice, and over one-third of respondents reported that carrying a security authentication token at all times would be so burdensome as to discourage adoption. Discussion Although adoption of e-prescribing has been shown to dramatically reduce medication errors, challenges to efficient processes and errors still persist from the perspective of the prescriber, that may interfere with the adoption of EPCS. Most prescribers regarded EPCS security measures as a small or moderate inconvenience (other than carrying a security token), with advantages outweighing the burden. Conclusion Prescribers are optimistic about the potential for EPCS to improve practice, but view certain security measures as a burden and potential barrier. PMID:21946239
Li, Qi; Melton, Kristin; Lingren, Todd; Kirkendall, Eric S; Hall, Eric; Zhai, Haijun; Ni, Yizhao; Kaiser, Megan; Stoutenborough, Laura; Solti, Imre
2014-01-01
Although electronic health records (EHRs) have the potential to provide a foundation for quality and safety algorithms, few studies have measured their impact on automated adverse event (AE) and medical error (ME) detection within the neonatal intensive care unit (NICU) environment. This paper presents two phenotyping AE and ME detection algorithms (ie, IV infiltrations, narcotic medication oversedation and dosing errors) and describes manual annotation of airway management and medication/fluid AEs from NICU EHRs. From 753 NICU patient EHRs from 2011, we developed two automatic AE/ME detection algorithms, and manually annotated 11 classes of AEs in 3263 clinical notes. Performance of the automatic AE/ME detection algorithms was compared to trigger tool and voluntary incident reporting results. AEs in clinical notes were double annotated and consensus achieved under neonatologist supervision. Sensitivity, positive predictive value (PPV), and specificity are reported. Twelve severe IV infiltrates were detected. The algorithm identified one more infiltrate than the trigger tool and eight more than incident reporting. One narcotic oversedation was detected demonstrating 100% agreement with the trigger tool. Additionally, 17 narcotic medication MEs were detected, an increase of 16 cases over voluntary incident reporting. Automated AE/ME detection algorithms provide higher sensitivity and PPV than currently used trigger tools or voluntary incident-reporting systems, including identification of potential dosing and frequency errors that current methods are unequipped to detect. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
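For reference, the sensitivity, positive predictive value, and specificity reported in such comparisons reduce to simple ratios over a 2x2 table of algorithm output versus chart review; the sketch below shows the arithmetic with made-up counts rather than the study's results.

```python
def detection_metrics(tp, fp, fn, tn):
    """Sensitivity, positive predictive value, and specificity from a 2x2
    confusion table of detected vs. chart-reviewed adverse events."""
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    specificity = tn / (tn + fp)
    return sensitivity, ppv, specificity

# Illustrative counts only, not the study's data
sens, ppv, spec = detection_metrics(tp=12, fp=3, fn=1, tn=737)
print(f"sensitivity={sens:.2f}, PPV={ppv:.2f}, specificity={spec:.2f}")
```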
Bennett, Jerry M.; Cortes, Peter M.
1985-01-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367
Bennett, J M; Cortes, P M
1985-09-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios.
Gagnon, Bernadine; Miozzo, Michele
2017-01-01
Purpose This study aimed to test whether an approach to distinguishing errors arising in phonological processing from those arising in motor planning also predicts the extent to which repetition-based training can lead to improved production of difficult sound sequences. Method Four individuals with acquired speech production impairment who produced consonant cluster errors involving deletion were examined using a repetition task. We compared the acoustic details of productions with deletion errors in target consonant clusters to singleton consonants. Changes in accuracy over the course of the study were also compared. Results Two individuals produced deletion errors consistent with a phonological locus of the errors, and 2 individuals produced errors consistent with a motoric locus of the errors. The 2 individuals who made phonologically driven errors showed no change in performance on a repetition training task, whereas the 2 individuals with motoric errors improved in their production of both trained and untrained items. Conclusions The results extend previous findings about a metric for identifying the source of sound production errors in individuals with both apraxia of speech and aphasia. In particular, this work may provide a tool for identifying predominant error types in individuals with complex deficits. PMID:28655044
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbee, D; McCarthy, A; Galavis, P
Purpose: Errors found during initial physics plan checks frequently require replanning and reprinting, resulting in decreased departmental efficiency. Additionally, errors may be missed during physics checks, resulting in potential treatment errors or interruption. This work presents a process control created using the Eclipse Scripting API (ESAPI) enabling dosimetrists and physicists to detect potential errors in the Eclipse treatment planning system prior to performing any plan approvals or printing. Methods: Potential failure modes for five categories were generated based on available ESAPI (v11) patient object properties: Images, Contours, Plans, Beams, and Dose. An Eclipse script plugin (PlanCheck) was written in C# to check the errors most frequently observed clinically in each of the categories. The PlanCheck algorithms were devised to check technical aspects of plans, such as deliverability (e.g., minimum EDW MUs), in addition to ensuring that policies and procedures relating to planning were being followed. The effect on clinical workflow efficiency was measured by tracking the plan document error rate and plan revision/retirement rates in the Aria database over monthly intervals. Results: The numbers of potential failure modes the PlanCheck script is currently capable of checking for are, by category: Images (6), Contours (7), Plans (8), Beams (17), and Dose (4). Prior to implementation of the PlanCheck plugin, the observed error rates in errored plan documents and in revised/retired plans in the Aria database were 20% and 22%, respectively. Error rates were seen to decrease gradually over time as adoption of the script improved. Conclusion: A process control created using the Eclipse Scripting API enabled plan checks to occur within the planning system, resulting in reduced error rates and improved efficiency. Future work includes initiating a full FMEA for the planning workflow, extending categories to include additional checks outside of ESAPI via Aria database queries, and eventual automated plan checks.
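The abstract's checks were implemented in C# against the ESAPI object model; purely to illustrate the rule-based approach, here is a language-neutral Python sketch over a hypothetical plan summary dictionary. The field names, the 30-day CT rule, and the MU threshold are invented examples (the abstract only names minimum EDW MUs as one deliverability check), not the PlanCheck plugin's actual logic or the ESAPI API.

```python
# Hypothetical plan summary; field names are illustrative, not the ESAPI model.
plan = {
    "ct_within_30_days": True,
    "structures": ["PTV", "SpinalCord"],
    "beams": [{"id": "B1", "mu": 18, "wedge": "EDW60"},
              {"id": "B2", "mu": 150, "wedge": None}],
    "dose_normalized": True,
}

def check_plan(plan, min_edw_mu=20):
    """Return a list of human-readable warnings for common failure modes."""
    warnings = []
    if not plan["ct_within_30_days"]:
        warnings.append("Planning CT older than 30 days.")
    if "BODY" not in plan["structures"]:
        warnings.append("No BODY/external contour found.")
    for beam in plan["beams"]:
        if beam["wedge"] and beam["mu"] < min_edw_mu:
            warnings.append(f"Beam {beam['id']}: {beam['mu']} MU is below the "
                            f"assumed deliverable minimum for {beam['wedge']}.")
    if not plan["dose_normalized"]:
        warnings.append("Plan dose not normalized per policy.")
    return warnings

for w in check_plan(plan):
    print("WARNING:", w)
```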
Parametric Modulation of Error-Related ERP Components by the Magnitude of Visuo-Motor Mismatch
ERIC Educational Resources Information Center
Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik
2011-01-01
Errors generate typical brain responses, characterized by two successive event-related potentials (ERP) following incorrect action: the error-related negativity (ERN) and the positivity error (Pe). However, it is unclear whether these error-related responses are sensitive to the magnitude of the error, or instead show all-or-none effects. We…
Signals of Opportunity Navigation Using Wi-Fi Signals
2011-03-24
The work compares the mean value method (MVM) and the scaled differential method (SDM) for identifying packet indices from Wi-Fi signal correlations. An error was logged whenever the correlation algorithm identified an incorrect packet index; a window of 50 packets appears to provide zero errors for MVM and near-zero errors for SDM.
Annotation of Korean Learner Corpora for Particle Error Detection
ERIC Educational Resources Information Center
Lee, Sun-Hee; Jang, Seok Bae; Seo, Sang-Kyu
2009-01-01
In this study, we focus on particle errors and discuss an annotation scheme for Korean learner corpora that can be used to extract heuristic patterns of particle errors efficiently. We investigate different properties of particle errors so that they can be later used to identify learner errors automatically, and we provide resourceful annotation…
Diagnostic Errors in Ambulatory Care: Dimensions and Preventive Strategies
ERIC Educational Resources Information Center
Singh, Hardeep; Weingart, Saul N.
2009-01-01
Despite an increasing focus on patient safety in ambulatory care, progress in understanding and reducing diagnostic errors in this setting lag behind many other safety concerns such as medication errors. To explore the extent and nature of diagnostic errors in ambulatory care, we identified five dimensions of ambulatory care from which errors may…
Horri, J; Cransac, A; Quantin, C; Abrahamowicz, M; Ferdynus, C; Sgro, C; Robillard, P-Y; Iacobelli, S; Gouyon, J-B
2014-12-01
The risk of dosage Prescription Medication Error (PME) among manually written prescriptions within 'mixed' prescribing system (computerized physician order entry (CPOE) + manual prescriptions) has not been previously assessed in neonatology. This study aimed to evaluate the rate of dosage PME related to manual prescriptions in the high-risk population of very preterm infants (GA < 33 weeks) in a mixed prescription system. The study was based on a retrospective review of a random sample of manual daily prescriptions in two neonatal intensive care units (NICU) A and B, located in different French University hospitals (Dijon and La Reunion island). Daily prescription was defined as the set of all drugs manually prescribed on a single day for one patient. Dosage error was defined as a deviation of at least ±10% from the weight-appropriate recommended dose. The analyses were based on the assessment of 676 manually prescribed drugs from NICU A (58 different drugs from 93 newborns and 240 daily prescriptions) and 354 manually prescribed drugs from NICU B (73 different drugs from 131 newborns and 241 daily prescriptions). The dosage error rate per 100 manually prescribed drugs was similar in both NICU: 3·8% (95% CI: 2·5-5·6%) in NICU A and 3·1% (95% CI: 1·6-5·5%) in NICU B (P = 0·54). Among all the 37 identified dosage errors, the over-dosing was almost as frequent as the under-dosing (17 and 20 errors, respectively). Potentially severe dosage errors occurred in a total of seven drug prescriptions. None of the dosage PME was recorded in the corresponding medical files and information on clinical outcome was not sufficient to identify clinical conditions related to dosage PME. Overall, 46·8% of manually prescribed drugs were off label or unlicensed, with no significant differences between prescriptions with or without dosage error. The risk of a dosage PME increased significantly if the drug was included in the CPOE system but was manually prescribed (OR = 3·3; 95% CI: 1·6-7·0, P < 0·001). The presence of dosage PME in the manual prescriptions written within mixed prescription systems suggests that manual prescriptions should be totally avoided in neonatal units. © 2014 John Wiley & Sons Ltd.
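To make the ±10% definition concrete, here is a small sketch that flags a dose deviating by at least 10% from the weight-based recommended dose; the drug, dose, and weight in the example are hypothetical and not taken from the study.

```python
def dosage_error(prescribed_mg, weight_kg, recommended_mg_per_kg, tolerance=0.10):
    """Flag a dosage prescription error, defined as in the study: a deviation
    of at least +/-10% from the weight-appropriate recommended dose."""
    expected_mg = weight_kg * recommended_mg_per_kg
    deviation = (prescribed_mg - expected_mg) / expected_mg
    return abs(deviation) >= tolerance, deviation

# Hypothetical example: a 1.2 kg infant prescribed 7.5 mg of a drug dosed at 5 mg/kg/day
is_error, dev = dosage_error(prescribed_mg=7.5, weight_kg=1.2, recommended_mg_per_kg=5.0)
print(f"error={is_error}, deviation={dev:+.0%}")
```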
Sources of error in the retracted scientific literature.
Casadevall, Arturo; Steen, R Grant; Fang, Ferric C
2014-09-01
Retraction of flawed articles is an important mechanism for correction of the scientific literature. We recently reported that the majority of retractions are associated with scientific misconduct. In the current study, we focused on the subset of retractions for which no misconduct was identified, in order to identify the major causes of error. Analysis of the retraction notices for 423 articles indexed in PubMed revealed that the most common causes of error-related retraction are laboratory errors, analytical errors, and irreproducible results. The most common laboratory errors are contamination and problems relating to molecular biology procedures (e.g., sequencing, cloning). Retractions due to contamination were more common in the past, whereas analytical errors are now increasing in frequency. A number of publications that have not been retracted despite being shown to contain significant errors suggest that barriers to retraction may impede correction of the literature. In particular, few cases of retraction due to cell line contamination were found despite recognition that this problem has affected numerous publications. An understanding of the errors leading to retraction can guide practices to improve laboratory research and the integrity of the scientific literature. Perhaps most important, our analysis has identified major problems in the mechanisms used to rectify the scientific literature and suggests a need for action by the scientific community to adopt protocols that ensure the integrity of the publication process. © FASEB.
Dossett, Lesly A; Kauffmann, Rondi M; Lee, Jay S; Singh, Harkamal; Lee, M Catherine; Morris, Arden M; Jagsi, Reshma; Quinn, Gwendolyn P; Dimick, Justin B
2018-06-01
Our objective was to determine specialist physicians' attitudes and practices regarding disclosure of pre-referral errors. Physicians are encouraged to disclose their own errors to patients. However, no clear professional norms exist regarding disclosure when physicians discover errors in diagnosis or treatment that occurred at other institutions before referral. We conducted semistructured interviews of cancer specialists from 2 National Cancer Institute-designated Cancer Centers. We purposively sampled specialists by discipline, sex, and experience-level who self-described a >50% reliance on external referrals (n = 30). Thematic analysis of verbatim interview transcripts was performed to determine physician attitudes regarding disclosure of pre-referral medical errors; whether and how physicians disclose these errors; and barriers to providing full disclosure. Participants described their experiences identifying different types of pre-referral errors including errors of diagnosis, staging and treatment resulting in adverse events ranging from decreased quality of life to premature death. The majority of specialists expressed the belief that disclosure provided no benefit to patients, and might unnecessarily add to their anxiety about their diagnoses or prognoses. Specialists had varying practices of disclosure including none, non-verbal, partial, event-dependent, and full disclosure. They identified a number of barriers to disclosure, including medicolegal implications and damage to referral relationships, the profession's reputation, and to patient-physician relationships. Specialist physicians identify pre-referral errors but struggle with whether and how to provide disclosure, even when clinical circumstances force disclosure. Education- or communication-based interventions that overcome barriers to disclosing pre-referral errors warrant development.
Byrne, Eamonn; Bury, Gerard
2018-02-08
Incident reporting is vital to identifying pre-hospital medication safety issues because literature suggests that the majority of errors pre-hospital are self-identified. In 2016, the National Ambulance Service (NAS) reported 11 medication errors to the national body with responsibility for risk management and insurance cover. The Health Information and Quality Authority in 2014 stated that reporting of clinical incidents, of which medication errors are a subset, was not felt to be representative of the actual events occurring. Even though reporting systems are in place, the levels appear to be well below what might be expected. Little data is available to explain this apparent discrepancy. To identify, investigate and document the barriers to medication error reporting within the NAS. An independent moderator led four focus groups in March of 2016. A convenience sample of 18 frontline Paramedics and Advanced Paramedics from Cork City and County discussed medication errors and the medication error reporting process. The sessions were recorded and anonymised, and the data was analysed using a process of thematic analysis. Practitioners understood the value of reporting errors. Barriers to reporting included fear of consequences and ridicule, procedural ambiguity, lack of feedback and a perceived lack of both consistency and confidentiality. The perceived consequences for making an error included professional, financial, litigious and psychological. Staff appeared willing to admit errors in a psychologically safe environment. Barriers to reporting are in line with international evidence. Time constraints prevented achievement of thematic saturation. Further study is warranted.
Cooper, P David; Smart, David R
2017-06-01
Recent Australian attempts to facilitate disinvestment in healthcare, by identifying instances of 'inappropriate' care from large Government datasets, are subject to significant methodological flaws. Amongst other criticisms has been the fact that the Government datasets utilized for this purpose correlate poorly with datasets collected by relevant professional bodies. Government data derive from official hospital coding, collected retrospectively by clerical personnel, whilst professional body data derive from unit-specific databases, collected contemporaneously with care by clinical personnel. Assessment of accuracy of official hospital coding data for hyperbaric services in a tertiary referral hospital. All official hyperbaric-relevant coding data submitted to the relevant Australian Government agencies by the Royal Hobart Hospital, Tasmania, Australia for financial year 2010-2011 were reviewed and compared against actual hyperbaric unit activity as determined by reference to original source documents. Hospital coding data contained one or more errors in diagnoses and/or procedures in 70% of patients treated with hyperbaric oxygen that year. Multiple discrete error types were identified, including (but not limited to): missing patients; missing treatments; 'additional' treatments; 'additional' patients; incorrect procedure codes and incorrect diagnostic codes. Incidental observations of errors in surgical, anaesthetic and intensive care coding within this cohort suggest that the problems are not restricted to the specialty of hyperbaric medicine alone. Publications from other centres indicate that these problems are not unique to this institution or State. Current Government datasets are irretrievably compromised and not fit for purpose. Attempting to inform the healthcare policy debate by reference to these datasets is inappropriate. Urgent clinical engagement with hospital coding departments is warranted.
Human error identification for laparoscopic surgery: Development of a motion economy perspective.
Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong
2015-09-01
This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
NASA Technical Reports Server (NTRS)
Keller, M. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Inherent errors in using nonmetric Skylab photography and office-identified photo control made it necessary to perform numerous block adjustment solutions involving different combinations of control and weights. The final block adjustment was executed holding to 14 of the office-identified photo control points. Solution accuracy was evaluated by comparing the analytically computed ground positions of the withheld photo control points with their known ground positions and also by determining the standard errors of these points from variance values. A horizontal position RMS error of 15 meters was attained. The maximum observed error in position at a control point was 25 meters.
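A brief sketch of the accuracy measure used here, horizontal-position RMS error over withheld check points, computed from planimetric residuals; the coordinates below are invented for illustration.

```python
import numpy as np

def horizontal_rms_error(computed_xy, known_xy):
    """Horizontal-position RMS error over withheld photo control points:
    the square root of the mean squared planimetric distance (metres)."""
    d2 = np.sum((np.asarray(computed_xy) - np.asarray(known_xy)) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))

# Toy check points in a local grid (metres); values are illustrative
computed = [(100.0, 200.0), (512.0, 318.0), (890.0, 75.0)]
known    = [(112.0, 208.0), (505.0, 300.0), (880.0, 90.0)]
print(f"RMS = {horizontal_rms_error(computed, known):.1f} m")
```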
Sources of Response Bias in Older Ethnic Minorities: A Case of Korean American Elderly
Kim, Miyong T.; Ko, Jisook; Yoon, Hyunwoo; Kim, Kim B.; Jang, Yuri
2015-01-01
The present study was undertaken to investigate potential sources of response bias in empirical research involving older ethnic minorities and to identify prudent strategies to reduce those biases, using Korean American elderly (KAE) as an example. Data were obtained from three independent studies of KAE (N=1,297; age ≥60) in three states (Florida, New York, and Maryland) from 2000 to 2008. Two common measures, Pearlin’s Mastery Scale and the CES-D scale, were selected for a series of psychometric tests based on classical measurement theory. Survey items were analyzed in depth, using psychometric properties generated from both exploratory factor analysis and confirmatory factor analysis as well as correlational analysis. Two types of potential sources of bias were identified as the most significant contributors to increases in error variances for these psychological instruments. Error variances were most prominent when (1) items were not presented in a manner that was culturally or contextually congruent with respect to the target population and/or (2) the response anchors for items were mixed (e.g., positive vs. negative). The systemic patterns and magnitudes of the biases were also cross-validated for the three studies. The results demonstrate sources and impacts of measurement biases in studies of older ethnic minorities. The identified response biases highlight the need for re-evaluation of current measurement practices, which are based on traditional recommendations that response anchors should be mixed or that the original wording of instruments should be rigidly followed. Specifically, systematic guidelines for accommodating cultural and contextual backgrounds into instrument design are warranted. PMID:26049971
SU-E-T-635: Process Mapping of Eye Plaque Brachytherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huynh, J; Kim, Y
Purpose: To apply a risk-based assessment and analysis technique (AAPM TG 100) to eye plaque brachytherapy treatment of ocular melanoma. Methods: The roles and responsibilities of the personnel involved in eye plaque brachytherapy were defined for the retinal specialist, radiation oncologist, nurse, and medical physicist. The entire procedure was examined carefully. First, major processes were identified and then details for each major process were followed. Results: Seventy-one total potential modes were identified. The eight major processes (and corresponding detailed numbers of modes) are patient consultation (2 modes), pretreatment tumor localization (11), treatment planning (13), seed ordering and calibration (10), eye plaque assembly (10), implantation (11), removal (11), and deconstruction (3), respectively. Half of the total modes (36 modes) are related to the physicist, although the physicist is not involved in some processes, such as the actual procedures of suturing and removing the plaque. Conclusion: Not only can failure modes arise from physicist-related procedures such as treatment planning and source activity calibration, but they can also arise in more clinical procedures performed by other medical staff. Improving the accuracy of communication in non-physicist-related clinical procedures could be one approach to preventing human errors. A more rigorous physics double check would reduce errors in physicist-related procedures. Eventually, based on this detailed process map, failure mode and effect analysis (FMEA) will identify the top tiers of modes by ranking all possible modes with a risk priority number (RPN). For those high-risk modes, fault tree analysis (FTA) will provide possible preventive action plans.
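The FMEA step mentioned in the conclusion ranks failure modes by risk priority number, the product of occurrence, severity, and detectability scores. The sketch below shows that ranking on invented modes and scores; it is not the authors' analysis.

```python
# Minimal FMEA ranking sketch: each failure mode gets occurrence (O),
# severity (S), and detectability (D) scores and RPN = O * S * D.
# The modes and scores are illustrative only.
failure_modes = [
    {"process": "seed ordering/calibration", "mode": "wrong source strength",    "O": 3, "S": 8, "D": 4},
    {"process": "eye plaque assembly",       "mode": "seed placed in wrong slot", "O": 4, "S": 7, "D": 6},
    {"process": "treatment planning",        "mode": "incorrect plaque model",    "O": 2, "S": 9, "D": 3},
]

for fm in failure_modes:
    fm["RPN"] = fm["O"] * fm["S"] * fm["D"]

# Highest-RPN modes first: candidates for preventive action plans
for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
    print(f'{fm["RPN"]:4d}  {fm["process"]:28s} {fm["mode"]}')
```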
Sources of Response Bias in Older Ethnic Minorities: A Case of Korean American Elderly.
Kim, Miyong T; Lee, Ju-Young; Ko, Jisook; Yoon, Hyunwoo; Kim, Kim B; Jang, Yuri
2015-09-01
The present study was undertaken to investigate potential sources of response bias in empirical research involving older ethnic minorities and to identify prudent strategies to reduce those biases, using Korean American elderly (KAE) as an example. Data were obtained from three independent studies of KAE (N = 1,297; age ≥60) in three states (Florida, New York, and Maryland) from 2000 to 2008. Two common measures, Pearlin's Mastery Scale and the CES-D scale, were selected for a series of psychometric tests based on classical measurement theory. Survey items were analyzed in depth, using psychometric properties generated from both exploratory factor analysis and confirmatory factor analysis as well as correlational analysis. Two types of potential sources of bias were identified as the most significant contributors to increases in error variances for these psychological instruments. Error variances were most prominent when (1) items were not presented in a manner that was culturally or contextually congruent with respect to the target population and/or (2) the response anchors for items were mixed (e.g., positive vs. negative). The systemic patterns and magnitudes of the biases were also cross-validated for the three studies. The results demonstrate sources and impacts of measurement biases in studies of older ethnic minorities. The identified response biases highlight the need for re-evaluation of current measurement practices, which are based on traditional recommendations that response anchors should be mixed or that the original wording of instruments should be rigidly followed. Specifically, systematic guidelines for accommodating cultural and contextual backgrounds into instrument design are warranted.
Human Factors Research Under Ground-Based and Space Conditions. Part 1
NASA Technical Reports Server (NTRS)
1997-01-01
Session TP2 includes short reports concerning: (1) Human Factors Engineering of the International Space Station Human Research Facility; (2) Structured Methods for Identifying and Correcting Potential Human Errors in Space Operations; (3) An Improved Procedure for Selecting Astronauts for Extended Space Missions; (4) The NASA Performance Assessment Workstation: Cognitive Performance During Head-Down Bedrest; (5) Cognitive Performance Aboard the Life and Microgravity Spacelab; and (6) Psychophysiological Reactivity Under MIR-Simulation and Real Micro-G.
2016-01-01
Excerpt from a study of data science within DIA, intended to ensure that data science activities assist and inform DIA's decisionmakers, analysts, and operators. Identification performed manually by an analyst or researcher can be time-consuming and potentially full of errors; GENIE learns from analysts. An interview protocol (Appendix A) was intended to elicit information in five broad research areas.
Best practices for evaluating single nucleotide variant calling methods for microbial genomics
Olson, Nathan D.; Lund, Steven P.; Colman, Rebecca E.; Foster, Jeffrey T.; Sahl, Jason W.; Schupp, James M.; Keim, Paul; Morrow, Jayne B.; Salit, Marc L.; Zook, Justin M.
2015-01-01
Innovations in sequencing technologies have allowed biologists to make incredible advances in understanding biological systems. As experience grows, researchers increasingly recognize that analyzing the wealth of data provided by these new sequencing platforms requires careful attention to detail for robust results. Thus far, much of the scientific community's focus in bacterial genomics has been on evaluating genome assembly algorithms and rigorously validating assembly program performance. Missing, however, is a focus on critical evaluation of variant callers for these genomes. Variant calling is essential for comparative genomics as it yields insights into nucleotide-level organismal differences. Variant calling is a multistep process with a host of potential error sources that may lead to incorrect variant calls. Identifying and resolving these incorrect calls is critical for bacterial genomics to advance. The goal of this review is to provide guidance on validating algorithms and pipelines used in variant calling for bacterial genomics. First, we will provide an overview of the variant calling procedures and the potential sources of error associated with the methods. We will then identify appropriate datasets for use in evaluating algorithms and describe statistical methods for evaluating algorithm performance. As variant calling moves from basic research to the applied setting, standardized methods for performance evaluation and reporting are required; it is our hope that this review provides the groundwork for the development of these standards. PMID:26217378
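As one concrete example of the statistical evaluation discussed in the review, precision, recall, and F1 can be computed by intersecting a call set with a truth set keyed on position and allele; the sketch below deliberately ignores the variant-representation and region-stratification issues that real benchmarks must handle.

```python
def variant_call_metrics(called, truth):
    """Precision, recall, and F1 for a set of called SNVs against a truth set.
    Variants are keyed by (chromosome, position, ref, alt); a minimal sketch."""
    called, truth = set(called), set(truth)
    tp = len(called & truth)
    fp = len(called - truth)
    fn = len(truth - called)
    precision = tp / (tp + fp) if called else 0.0
    recall = tp / (tp + fn) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy truth and call sets, invented for illustration
truth = {("chr1", 1200, "A", "G"), ("chr1", 5340, "C", "T"), ("chr2", 77, "G", "A")}
calls = {("chr1", 1200, "A", "G"), ("chr2", 77, "G", "A"), ("chr2", 900, "T", "C")}
print(variant_call_metrics(calls, truth))
```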
Creating an Oversight Infrastructure for Electronic Health Record-Related Patient Safety Hazards
Singh, Hardeep; Classen, David C.; Sittig, Dean F.
2013-01-01
Electronic health records (EHRs) have potential quality and safety benefits. However, reports of EHR-related safety hazards are now emerging. The Office of the National Coordinator (ONC) for Health Information Technology (HIT) recently sponsored an Institute of Medicine committee to evaluate how HIT use affects patient safety. In this paper, we propose the creation of a national EHR oversight program to provide dedicated surveillance of EHR-related safety hazards and to promote learning from identified errors, close calls, and adverse events. The program calls for data gathering, investigation/analysis and regulatory components. The first two functions will depend on institution-level EHR safety committees that will investigate all known EHR-related adverse events and near-misses and report them nationally using standardized methods. These committees should also perform routine safety self-assessments to proactively identify new risks. Nationally, we propose the long-term creation of a centralized, non-partisan board with an appropriate legal and regulatory infrastructure to ensure the safety of EHRs. We discuss the rationale of the proposed oversight program and its potential organizational components and functions. These include mechanisms for robust data collection and analyses of all safety concerns using multiple methods that extend beyond reporting; multidisciplinary investigation of selected high-risk safety events; and enhanced coordination with other national agencies in order to facilitate broad dissemination of hazards information. Implementation of this proposed infrastructure can facilitate identification of EHR-related adverse events and errors and potentially create a safer and more effective EHR-based health care delivery system. PMID:22080284
Massanari, R M; Wilkerson, K; Streed, S A; Hierholzer, W J
1987-01-01
Proper reporting of discharge diagnoses, including complications of medical care, is essential for maximum recovery of revenues under the prospective reimbursement system. To evaluate the effectiveness of abstracting techniques in identifying nosocomial infections at discharge, discharge abstracts of patients with nosocomial infections were reviewed during September through November of 1984. Patients with nosocomial infections were identified using modified Centers for Disease Control (CDC) definitions and trained surveillance technicians. Records which did not include the diagnosis of nosocomial infections in the discharge abstract were identified, and potential lost revenues were estimated. We identified 631 infections in 498 patients. On average, only 57 per cent of the infections were properly recorded and coded in the discharge abstract. Of the additional monies which might be anticipated by the health care institution to assist in the cost of care of adverse events, approximately one-third would have been lost due to errors in coding in the discharge abstract. Although these lost revenues are substantial, they constitute but a small proportion of the potential costs to the institution when patients acquire nosocomial infections. PMID:3105338
Identifying medication error chains from critical incident reports: a new analytic approach.
Huckels-Baumgart, Saskia; Manser, Tanja
2014-10-01
Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. Our study was conducted at a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the Medication Error Index NCC MERP and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during medication administration. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
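A minimal sketch of the HEART arithmetic referenced above: the nominal error probability for a generic task type is scaled by each applicable error-producing condition (EPC) according to ((maximum effect - 1) x assessed proportion of affect + 1). The nominal HEP and EPC values below are illustrative, not the ones used in the NASA analysis.

```python
def heart_hep(nominal_hep, epcs):
    """HEART assessed human error probability: nominal HEP for the generic
    task type multiplied, for each error-producing condition, by
    ((max_effect - 1) * assessed_proportion + 1); capped at 1.0."""
    hep = nominal_hep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)

# Hypothetical ground-processing task: nominal HEP 0.003 with two EPCs,
# e.g., time shortage (max effect x11, 40% assessed) and poor procedures (x3, 50%).
print(f"HEP = {heart_hep(0.003, [(11, 0.4), (3, 0.5)]):.4f}")
```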
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the identification and or classification of underlying contributors and causes of human error must be identified, in order to manage human error.This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
Kodak, Tiffany; Campbell, Vincent; Bergmann, Samantha; LeBlanc, Brittany; Kurtz-Nelson, Eva; Cariveau, Tom; Haq, Shaji; Zemantic, Patricia; Mahon, Jacob
2016-09-01
Prior research shows that learners have idiosyncratic responses to error-correction procedures during instruction. Thus, assessments that identify error-correction strategies to include in instruction can aid practitioners in selecting individualized, efficacious, and efficient interventions. The current investigation conducted an assessment to compare 5 error-correction procedures that have been evaluated in the extant literature and are common in instructional practice for children with autism spectrum disorder (ASD). Results showed that the assessment identified efficacious and efficient error-correction procedures for all participants, and 1 procedure was efficient for 4 of the 5 participants. To examine the social validity of error-correction procedures, participants selected among efficacious and efficient interventions in a concurrent-chains assessment. We discuss the results in relation to prior research on error-correction procedures and current instructional practices for learners with ASD. © 2016 Society for the Experimental Analysis of Behavior.
Purification of Logic-Qubit Entanglement.
Zhou, Lan; Sheng, Yu-Bo
2016-07-05
Logic-qubit entanglement has recently shown potential applications in future quantum communication and quantum networks. However, this entanglement will suffer from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and the phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. The phase-flip error in physical-qubit entanglement is equivalent to the bit-flip error in logic-qubit entanglement, and it can also be purified. This entanglement purification protocol may find applications in future quantum communication and quantum networks.
NASA Technical Reports Server (NTRS)
Fragola, Joseph R.; Maggio, Gaspare; Frank, Michael V.; Gerez, Luis; Mcfadden, Richard H.; Collins, Erin P.; Ballesio, Jorge; Appignani, Peter L.; Karns, James J.
1995-01-01
The application of probabilistic risk assessment methodology to the Space Shuttle environment, particularly to the potential of losing the Shuttle during nominal operation, is addressed. The different related concerns are identified and combined to determine overall program risks. A fault tree model is used to allocate system probabilities to the subsystem level. Loss of the vehicle due to failure to contain energetic gas and debris or to maintain proper propulsion and configuration is analyzed, along with loss due to Orbiter or external tank failure and landing failure or error.
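To illustrate how a fault tree combines probabilities when allocating risk to subsystems, the sketch below evaluates independent basic events through AND and OR gates; the event list and numbers are invented and unrelated to the study's Shuttle model.

```python
def or_gate(probs):
    """Probability of the top event when any independent basic event causes it:
    1 - product(1 - p_i)."""
    p = 1.0
    for pi in probs:
        p *= (1.0 - pi)
    return 1.0 - p

def and_gate(probs):
    """Probability that all independent basic events occur together."""
    p = 1.0
    for pi in probs:
        p *= pi
    return p

# Illustrative per-mission subsystem probabilities (not the study's numbers)
loss_of_vehicle = or_gate([
    1e-3,                      # failure to contain energetic gas and debris
    and_gate([5e-3, 2e-1]),    # propulsion fault AND failure of its mitigation
    5e-4,                      # landing failure or error
])
print(f"P(loss of vehicle) ~ {loss_of_vehicle:.2e}")
```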
Identifying high-risk medication: a systematic literature review.
Saedder, Eva A; Brock, Birgitte; Nielsen, Lars Peter; Bonnerup, Dorthe K; Lisby, Marianne
2014-06-01
A medication error (ME) is an error that causes damage or poses a threat of harm to a patient. Several studies have shown that only a minority of MEs actually causes harm, and this might explain why medication reviews at hospital admission reduce the number of MEs without showing an effect on length of hospital stay, readmissions, or death. The purpose of this study was to define drugs that actually cause serious MEs. We conducted a literature search of medication reviews and other preventive efforts. A systematic search in PubMed, Embase, Cochrane Reviews, Psycinfo, and SweMed+ was performed. Danish databases containing published patient complaints, patient compensation, and reported medication errors were also searched. Articles and case reports were included if they contained information on an ME causing a serious adverse reaction (AR) in a patient. Information concerning AR seriousness, causality, and preventability was required for inclusion. This systematic literature review revealed that 47 % of all serious MEs were caused by seven drugs or drug classes: methotrexate, warfarin, nonsteroidal anti-inflammatory drugs (NSAIDs), digoxin, opioids, acetylsalicylic acid, and beta-blockers; 30 drugs or drug classes caused 82 % of all serious MEs. The top ten drugs involved in fatal events accounted for 73 % of all drugs identified. Increasing focus on seven drugs/drug classes can potentially reduce hospitalizations, extended hospitalizations, disability, life-threatening conditions, and death by almost 50 %.
Using technology to prevent adverse drug events in the intensive care unit.
Hassan, Erkan; Badawi, Omar; Weber, Robert J; Cohen, Henry
2010-06-01
Critically ill patients are particularly susceptible to adverse drug events (ADEs) due to their rapidly changing and unstable physiology, complex therapeutic regimens, and large percentage of medications administered intravenously. There are a wide variety of technologies that can help prevent the points of failure commonly associated with ADEs (i.e., the five "Rights": right patient; right drug; right route; right dose; right frequency). These technologies are often categorized by their degree of complexity to design and engineer and the type of error they are designed to prevent. Focusing solely on the software and hardware design of technology may over- or underestimate the degree of difficulty to avoid ADEs at the bedside. Alternatively, we propose categorizing technological solutions by identifying the factors essential for success. The two major critical success factors are: 1) the degree of clinical assessment required by the clinician to appropriately evaluate and disposition the issue identified by a technology; and 2) the complexity associated with effective implementation. This classification provides a way of determining how ADE-preventing technologies in the intensive care unit can be successfully integrated into clinical practice. Although there are limited data on the effectiveness of many technologies in reducing ADEs, we will review the technologies currently available in the intensive care unit environment. We will also discuss critical success factors for implementation, common errors made during implementation, and the potential errors using these systems.
Harland, Karisa K; Carney, Cher; McGehee, Daniel
2016-07-03
The objective of this study was to estimate the prevalence and odds of fleet driver errors and potentially distracting behaviors just prior to rear-end versus angle crashes. Analysis of naturalistic driving videos among fleet services drivers for errors and potentially distracting behaviors occurring in the 6 s before crash impact. Categorical variables were examined using the Pearson's chi-square test, and continuous variables, such as eyes-off-road time, were compared using the Student's t-test. Multivariable logistic regression was used to estimate the odds of a driver error or potentially distracting behavior being present in the seconds before rear-end versus angle crashes. Of the 229 crashes analyzed, 101 (44%) were rear-end and 128 (56%) were angle crashes. Driver age, gender, and presence of passengers did not differ significantly by crash type. Over 95% of rear-end crashes involved inadequate surveillance compared to only 52% of angle crashes (P < .0001). Almost 65% of rear-end crashes involved a potentially distracting driver behavior, whereas less than 40% of angle crashes involved these behaviors (P < .01). On average, drivers spent 4.4 s with their eyes off the road while operating or manipulating their cell phone. Drivers in rear-end crashes had 3.06 (95% confidence interval [CI], 1.73-5.44) times higher adjusted odds of being potentially distracted than those in angle crashes. Fleet driver driving errors and potentially distracting behaviors are frequent. This analysis provides data to inform safe driving interventions for fleet services drivers. Further research is needed on effective interventions to reduce the likelihood of drivers' distracting behaviors and errors, thereby potentially reducing crashes.
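The adjusted odds ratio above comes from a multivariable logistic regression, but the basic arithmetic of an unadjusted odds ratio and its confidence interval can be shown from a 2x2 table. A minimal sketch using the Woolf log-odds method with hypothetical counts (illustrative only, not the authors' adjusted model):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Woolf (log-odds) confidence interval.

    a, b: exposed / unexposed counts in the case group (e.g., rear-end crashes)
    c, d: exposed / unexposed counts in the comparison group (e.g., angle crashes)
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lo, hi)

# Hypothetical counts: 65 of 101 rear-end crashes vs. 50 of 128 angle crashes
# involved a potentially distracting behavior (illustrative numbers only).
print(odds_ratio_ci(65, 101 - 65, 50, 128 - 50))
```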
ERIC Educational Resources Information Center
Au, Kathryn H.
The oral reading errors of 15 second graders were analyzed to find out if strategies used by good and poor readers could be differentiated. Patterns of errors were identified, and it was found that good readers often used context cues, while poor readers relied heavily on visual-phonic information. It was also possible to identify good and poor…
Zhu, Ling-Ling; Lv, Na; Zhou, Quan
2016-12-01
We read, with great interest, the study by Baldwin and Rodriguez (2016), which describes the role of the verification nurse and details the verification process in identifying errors related to chemotherapy orders. We strongly agree with their findings that a verification nurse, collaborating closely with the prescribing physician, pharmacist, and treating nurse, can better identify errors and maintain safety during chemotherapy administration.
Determinants of Wealth Fluctuation: Changes in Hard-To-Measure Economic Variables in a Panel Study
Pfeffer, Fabian T.; Griffin, Jamie
2017-01-01
Measuring fluctuation in families’ economic conditions is the raison d’être of household panel studies. Accordingly, a particularly challenging critique is that extreme fluctuation in measured economic characteristics might indicate compounding measurement error rather than actual changes in families’ economic wellbeing. In this article, we address this claim by moving beyond the assumption that particularly large fluctuation in economic conditions might be too large to be realistic. Instead, we examine predictors of large fluctuation, capturing sources related to actual socio-economic changes as well as potential sources of measurement error. Using the Panel Study of Income Dynamics, we study between-wave changes in a dimension of economic wellbeing that is especially hard to measure, namely, net worth as an indicator of total family wealth. Our results demonstrate that even very large between-wave changes in net worth can be attributed to actual socio-economic and demographic processes. We do, however, also identify a potential source of measurement error that contributes to large wealth fluctuation, namely, the treatment of incomplete information, presenting a pervasive challenge for any longitudinal survey that includes questions on economic assets. Our results point to ways for improving wealth variables both in the data collection process (e.g., by measuring active savings) and in data processing (e.g., by improving imputation algorithms). PMID:28316752
Programmable Infusion Pumps in ICUs: An Analysis of Corresponding Adverse Drug Events
Bower, Anthony G.; Paddock, Susan M.; Hilborne, Lee H.; Wallace, Peggy; Rothschild, Jeffrey M.; Griffin, Anne; Fairbanks, Rollin J.; Carlson, Beverly; Panzer, Robert J.; Brook, Robert H.
2007-01-01
Background Patients in intensive care units (ICUs) frequently experience adverse drug events involving intravenous medications (IV-ADEs), which are often preventable. Objectives To determine how frequently preventable IV-ADEs in ICUs match the safety features of a programmable infusion pump with safety software (“smart pump”) and to suggest potential improvements in smart-pump design. Design Using retrospective medical-record review, we examined preventable IV-ADEs in ICUs before and after 2 hospitals replaced conventional pumps with smart pumps. The smart pumps alerted users when programmed to deliver duplicate infusions or continuous-infusion doses outside hospital-defined ranges. Participants 4,604 critically ill adults at 1 academic and 1 nonacademic hospital. Measurements Preventable IV-ADEs matching smart-pump features and errors involved in preventable IV-ADEs. Results Of 100 preventable IV-ADEs identified, 4 involved errors matching smart-pump features. Two occurred before and 2 after smart-pump implementation. Overall, 29% of preventable IV-ADEs involved overdoses; 37%, failures to monitor for potential problems; and 45%, failures to intervene when problems appeared. Error descriptions suggested that expanding smart pumps’ capabilities might enable them to prevent more IV-ADEs. Conclusion The smart pumps we evaluated are unlikely to reduce preventable IV-ADEs in ICUs because they address only 4% of them. Expanding smart-pump capabilities might prevent more IV-ADEs. PMID:18095043
Aston, Elizabeth; Channon, Alastair; Day, Charles; Knight, Christopher G.
2013-01-01
Understanding the effect of population size on the key parameters of evolution is particularly important for populations nearing extinction. There are evolutionary pressures to evolve sequences that are both fit and robust. At high mutation rates, individuals with greater mutational robustness can outcompete those with higher fitness. This is survival-of-the-flattest, and has been observed in digital organisms, theoretically, in simulated RNA evolution, and in RNA viruses. We introduce an algorithmic method capable of determining the relationship between population size, the critical mutation rate at which individuals with greater robustness to mutation are favoured over individuals with greater fitness, and the error threshold. Verification for this method is provided against analytical models for the error threshold. We show that the critical mutation rate for increasing haploid population sizes can be approximated by an exponential function, with much lower mutation rates tolerated by small populations. This is in contrast to previous studies which identified that critical mutation rate was independent of population size. The algorithm is extended to diploid populations in a system modelled on the biological process of meiosis. The results confirm that the relationship remains exponential, but show that both the critical mutation rate and error threshold are lower for diploids, rather than higher as might have been expected. Analyzing the transition from critical mutation rate to error threshold provides an improved definition of critical mutation rate. Natural populations with their numbers in decline can be expected to lose genetic material in line with the exponential model, accelerating and potentially irreversibly advancing their decline, and this could potentially affect extinction, recovery and population management strategy. The effect of population size is particularly strong in small populations with 100 individuals or less; the exponential model has significant potential in aiding population management to prevent local (and global) extinction events. PMID:24386200
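The reported exponential relationship can be illustrated by fitting a saturating exponential to (population size, critical mutation rate) pairs. A minimal sketch using scipy's curve_fit; the functional form and the data points are assumptions for illustration, not the authors' fitted model:

```python
import numpy as np
from scipy.optimize import curve_fit

def critical_mu(n, a, b, c):
    # Assumed form: the critical mutation rate rises with population size n
    # and saturates at an asymptote c (illustrative only).
    return c - a * np.exp(-b * n)

# Hypothetical (population size, critical mutation rate) observations.
pop_sizes = np.array([10, 25, 50, 100, 250, 500, 1000], dtype=float)
crit_rates = np.array([0.05, 0.08, 0.11, 0.14, 0.16, 0.17, 0.175])

params, _ = curve_fit(critical_mu, pop_sizes, crit_rates, p0=(0.13, 0.02, 0.18))
a, b, c = params
print(f"fitted: critical_mu(n) = {c:.3f} - {a:.3f} * exp(-{b:.4f} * n)")
```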
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novak, A; Nyflot, M; Sponseller, P
2014-06-01
Purpose: Radiation treatment planning involves a complex workflow that can make safety improvement efforts challenging. This study utilizes an incident reporting system to identify detection points of near-miss errors, in order to guide our departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected or their patterns. Methods: 1377 incidents were analyzed from a departmental near-miss error reporting system from 3/2012–10/2013. All incidents were prospectively reviewed weekly by a multi-disciplinary team, and assigned a near-miss severity score ranging from 0–4 reflecting potential harm (no harm to critical). A 98-step consensus workflow was used to determine origination and detection points of near-miss errors, categorized into 7 major steps (patient assessment/orders, simulation, contouring/treatment planning, pre-treatment plan checks, therapist/on-treatment review, post-treatment checks, and equipment issues). Categories were compared using ANOVA. Results: In the 7-step workflow, 23% of near-miss errors were detected within the same step in the workflow, while an additional 37% were detected by the next step in the workflow, and 23% were detected two steps downstream. Errors detected further from origination were more severe (p<.001; Figure 1). The most common source of near-miss errors was treatment planning/contouring, with 476 near misses (35%). Of those 476, only 72 (15%) were found before leaving treatment planning, 213 (45%) were found at physics plan checks, and 191 (40%) were caught at the therapist pre-treatment chart review or on portal imaging. Errors that passed through physics plan checks and were detected by therapists were more severe than other errors originating in contouring/treatment planning (1.81 vs 1.33, p<0.001). Conclusion: Errors caught by radiation treatment therapists tend to be more severe than errors caught earlier in the workflow, highlighting the importance of safety checks in dosimetry and physics. We are utilizing our findings to improve manual and automated checklists for dosimetry and physics.
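The comparison of severity across detection distances reported above can be reproduced with a one-way ANOVA over severity scores grouped by how far downstream of origination the error was caught. A minimal sketch with hypothetical severity scores (illustrative values, not the study's data):

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical near-miss severity scores (0-4) grouped by how many workflow
# steps downstream of origination the error was detected (illustrative only).
same_step       = [0, 1, 0, 2, 1, 0, 1]
one_step_later  = [1, 2, 1, 2, 0, 2, 1]
two_steps_later = [2, 3, 2, 1, 3, 2, 4]

f_stat, p_value = f_oneway(same_step, one_step_later, two_steps_later)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Mean severity per detection distance, mirroring the trend reported above.
for name, grp in [("same step", same_step), ("+1 step", one_step_later),
                  ("+2 steps", two_steps_later)]:
    print(name, round(float(np.mean(grp)), 2))
```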
Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation...
Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin
2012-01-01
The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.
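Information transfer rates like the one quoted above are commonly computed with the Wolpaw formula, which combines the number of selectable targets, classification accuracy, and selection rate. A minimal sketch; the target count and selection rate below are assumptions for illustration, not values taken from the study:

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Bits per minute under the Wolpaw information transfer rate formula."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits_per_selection = math.log2(n)
    else:
        bits_per_selection = (math.log2(n) + p * math.log2(p)
                              + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits_per_selection * selections_per_min

# Illustration: 32 targets, 96% accuracy, ~30 selections per minute (assumed values).
print(round(wolpaw_itr(32, 0.96, 30), 1), "bit/min")
```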
Object-based image analysis for cadastral mapping using satellite images
NASA Astrophysics Data System (ADS)
Kohli, D.; Crommelinck, S.; Bennett, R.; Koeva, M.; Lemmen, C.
2017-10-01
Cadasters together with the land registry form a core ingredient of any land administration system. Cadastral maps comprise the extent, ownership and value of land, which are essential for recording and updating land records. Traditional methods for cadastral surveying and mapping often prove to be labor, cost and time intensive: alternative approaches are thus being researched for creating such maps. With the advent of very high resolution (VHR) imagery, satellite remote sensing offers a tremendous opportunity for (semi-)automation of cadastral boundary detection. In this paper, we explore the potential of the object-based image analysis (OBIA) approach for this purpose by applying two segmentation methods, i.e. MRS (multi-resolution segmentation) and ESP (estimation of scale parameter), to identify visible cadastral boundaries. Results show that a balance between a high percentage of completeness and correctness is hard to achieve: a low error of commission often comes with a high error of omission. However, we conclude that the resulting segments/land use polygons can potentially be used as a base for further aggregation into tenure polygons using participatory mapping.
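Completeness and correctness, and the corresponding errors of omission and commission, follow directly from counts of correctly detected, missed, and falsely detected boundaries. A minimal sketch with illustrative counts (not the study's data):

```python
def completeness_correctness(true_positives, false_negatives, false_positives):
    completeness = true_positives / (true_positives + false_negatives)  # 1 - omission error
    correctness  = true_positives / (true_positives + false_positives)  # 1 - commission error
    return completeness, correctness

# Illustrative counts of detected boundary segments (assumed, not from the paper).
comp, corr = completeness_correctness(true_positives=120, false_negatives=30,
                                      false_positives=80)
print(f"completeness={comp:.2f} (omission error={1 - comp:.2f}), "
      f"correctness={corr:.2f} (commission error={1 - corr:.2f})")
```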
2013-01-01
Background Multi-site health sciences research is becoming more common, as it enables investigation of rare outcomes and diseases and new healthcare innovations. Multi-site research usually involves the transfer of large amounts of research data between collaborators, which increases the potential for accidental disclosures of protected health information (PHI). Standard protocols for preventing release of PHI are extremely vulnerable to human error, particularly when the shared data sets are large. Methods To address this problem, we developed an automated program (SAS macro) to identify possible PHI in research data before it is transferred between research sites. The macro reviews all data in a designated directory to identify suspicious variable names and data patterns. The macro looks for variables that may contain personal identifiers such as medical record numbers and social security numbers. In addition, the macro identifies dates and numbers that may identify people who belong to small groups, who may be identifiable even in the absence of traditional identifiers. Results Evaluation of the macro on 100 sample research data sets indicated a recall of 0.98 and a precision of 0.81. Conclusions When implemented consistently, the macro has the potential to streamline the PHI review process and significantly reduce accidental PHI disclosures. PMID:23521861
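The pattern-screening idea (suspicious variable names plus value patterns) can be illustrated outside SAS. A minimal Python sketch of the same idea; the column names, regular expressions, and thresholds are assumptions for illustration, not the macro's actual rules:

```python
import re

SUSPICIOUS_NAMES = re.compile(r"(ssn|social|mrn|med_?rec|dob|birth|name|address|phone)",
                              re.IGNORECASE)
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g., 123-45-6789
LONG_ID_PATTERN = re.compile(r"\b\d{7,10}\b")         # possible MRN-like numbers

def flag_possible_phi(columns, rows):
    """Return (column, reason) pairs for fields that may contain PHI."""
    flags = []
    for j, col in enumerate(columns):
        if SUSPICIOUS_NAMES.search(col):
            flags.append((col, "suspicious variable name"))
        values = " ".join(str(r[j]) for r in rows)
        if SSN_PATTERN.search(values):
            flags.append((col, "SSN-like value"))
        elif LONG_ID_PATTERN.search(values):
            flags.append((col, "long numeric identifier"))
    return flags

# Tiny illustrative dataset (hypothetical column names and values).
cols = ["study_id", "patient_name", "lab_value"]
data = [["A01", "Jane Doe", "123-45-6789"], ["A02", "John Roe", "5.4"]]
print(flag_possible_phi(cols, data))
```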
Accuracy of measurement in electrically evoked compound action potentials.
Hey, Matthias; Müller-Deile, Joachim
2015-01-15
Electrically evoked compound action potentials (ECAP) in cochlear implant (CI) patients are characterized by the amplitude of the N1P1 complex. The measurement of evoked potentials yields a combination of the measured signal with various noise components but for ECAP procedures performed in the clinical routine, only the averaged curve is accessible. To date no detailed analysis of error dimension has been published. The aim of this study was to determine the error of the N1P1 amplitude and to determine the factors that impact the outcome. Measurements were performed on 32 CI patients with either CI24RE (CA) or CI512 implants using the Software Custom Sound EP (Cochlear). N1P1 error approximation of non-averaged raw data consisting of recorded single-sweeps was compared to methods of error approximation based on mean curves. The error approximation of the N1P1 amplitude using averaged data showed comparable results to single-point error estimation. The error of the N1P1 amplitude depends on the number of averaging steps and amplification; in contrast, the error of the N1P1 amplitude is not dependent on the stimulus intensity. Single-point error showed smaller N1P1 error and better coincidence with 1/√(N) function (N is the number of measured sweeps) compared to the known maximum-minimum criterion. Evaluation of N1P1 amplitude should be accompanied by indication of its error. The retrospective approximation of this measurement error from the averaged data available in clinically used software is possible and best done utilizing the D-trace in forward masking artefact reduction mode (no stimulation applied and recording contains only the switch-on-artefact). Copyright © 2014 Elsevier B.V. All rights reserved.
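The dependence of the N1P1 amplitude error on averaging follows the usual standard-error scaling: for independent noise, the error of an N-sweep average shrinks as 1/√N. A minimal sketch that checks this empirically on simulated sweeps; the amplitude and noise level are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
true_n1p1_uv = 150.0        # assumed true N1P1 amplitude in microvolts
single_sweep_sd_uv = 400.0  # assumed single-sweep noise standard deviation

for n_sweeps in (25, 50, 100, 200):
    # Repeat the averaged measurement many times to estimate its spread.
    averaged = [np.mean(true_n1p1_uv + rng.normal(0, single_sweep_sd_uv, n_sweeps))
                for _ in range(2000)]
    empirical_sd = np.std(averaged)
    predicted_sd = single_sweep_sd_uv / np.sqrt(n_sweeps)   # 1/sqrt(N) scaling
    print(n_sweeps, round(float(empirical_sd), 1), round(float(predicted_sd), 1))
```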
Development and validation of Aviation Causal Contributors for Error Reporting Systems (ACCERS).
Baker, David P; Krokos, Kelley J
2007-04-01
This investigation sought to develop a reliable and valid classification system for identifying and classifying the underlying causes of pilot errors reported under the Aviation Safety Action Program (ASAP). ASAP is a voluntary safety program that air carriers may establish to study pilot and crew performance on the line. In ASAP programs, similar to the Aviation Safety Reporting System, pilots self-report incidents by filing a short text description of the event. The identification of contributors to errors is critical if organizations are to improve human performance, yet it is difficult for analysts to extract this information from text narratives. A taxonomy was needed that could be used by pilots to classify the causes of errors. After completing a thorough literature review, pilot interviews and a card-sorting task were conducted in Studies 1 and 2 to develop the initial structure of the Aviation Causal Contributors for Event Reporting Systems (ACCERS) taxonomy. The reliability and utility of ACCERS was then tested in studies 3a and 3b by having pilots independently classify the primary and secondary causes of ASAP reports. The results provided initial evidence for the internal and external validity of ACCERS. Pilots were found to demonstrate adequate levels of agreement with respect to their category classifications. ACCERS appears to be a useful system for studying human error captured under pilot ASAP reports. Future work should focus on how ACCERS is organized and whether it can be used or modified to classify human error in ASAP programs for other aviation-related job categories such as dispatchers. Potential applications of this research include systems in which individuals self-report errors and that attempt to extract and classify the causes of those events.
Moreira, Maria E; Hernandez, Caleb; Stevens, Allen D; Jones, Seth; Sande, Margaret; Blumen, Jason R; Hopkins, Emily; Bakes, Katherine; Haukoos, Jason S
2015-08-01
The Institute of Medicine has called on the US health care system to identify and reduce medical errors. Unfortunately, medication dosing errors remain commonplace and may result in potentially life-threatening outcomes, particularly for pediatric patients when dosing requires weight-based calculations. Novel medication delivery systems that may reduce dosing errors resonate with national health care priorities. Our goal was to evaluate novel, prefilled medication syringes labeled with color-coded volumes corresponding to the weight-based dosing of the Broselow Tape, compared with conventional medication administration, in simulated pediatric emergency department (ED) resuscitation scenarios. We performed a prospective, block-randomized, crossover study in which 10 emergency physician and nurse teams managed 2 simulated pediatric arrest scenarios in situ, using either prefilled, color-coded syringes (intervention) or conventional drug administration methods (control). The ED resuscitation room and the intravenous medication port were video recorded during the simulations. Data were extracted from video review by blinded, independent reviewers. Median time to delivery of all doses for the conventional and color-coded delivery groups was 47 seconds (95% confidence interval [CI] 40 to 53 seconds) and 19 seconds (95% CI 18 to 20 seconds), respectively (difference=27 seconds; 95% CI 21 to 33 seconds). With the conventional method, 118 doses were administered, with 20 critical dosing errors (17%); with the color-coded method, 123 doses were administered, with 0 critical dosing errors (difference=17%; 95% CI 4% to 30%). A novel color-coded, prefilled syringe decreased time to medication administration and significantly reduced critical dosing errors by emergency physician and nurse teams during simulated pediatric ED resuscitations. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.
Masalski, Marcin; Kręcicki, Tomasz
2013-04-12
Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), calibration error (6.19 dB), and additionally at the frequency of 250 Hz by frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to the decrease in measurement error can broaden the scope of Web-based pure-tone audiometry application.
Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David
2017-01-01
Background As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. Objective The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. Methods According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the study subject registration system. Results There were 127,700 error-planted subjects, of which 114,464 (89.64%) could still be identified as the previously registered subject, and the remaining 13,236 (10.36%, 13,236/127,700) were treated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The probability that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst is to narrow the questionable scope to a set of 13 PII fields. In the application, the proposed algorithm can flag questionable PII entry and serve as an effective tool. Conclusions The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially of required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. PMID:28213343
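The set-theoretic error check described above can be illustrated with a one-way hash over PII field combinations: combinations whose hash codes still match cannot contain the mistyped field, so the questionable fields are those absent from every matching combination. A minimal sketch; the field list, hash function, and matching rule are assumptions for illustration, not the GUID system's actual specification:

```python
import hashlib
from itertools import combinations

FIELDS = ["first_name", "last_name", "birth_date", "sex"]  # assumed PII fields

def hash_combinations(pii):
    """One-way hash codes for every combination of PII fields."""
    codes = set()
    for r in range(1, len(FIELDS) + 1):
        for combo in combinations(FIELDS, r):
            text = "|".join(f"{f}={pii[f]}".lower() for f in combo)
            codes.add((combo, hashlib.sha256(text.encode()).hexdigest()))
    return codes

stored = hash_combinations({"first_name": "Jane", "last_name": "Doe",
                            "birth_date": "1980-01-02", "sex": "F"})
entered = hash_combinations({"first_name": "Jane", "last_name": "Does",  # typo
                             "birth_date": "1980-01-02", "sex": "F"})

# Combinations whose hashes still match do not involve the mistyped field;
# fields absent from every matching combination are the questionable ones.
matching = {combo for combo, code in stored & entered}
questionable = set(FIELDS) - {f for combo in matching for f in combo}
print("questionable PII fields:", questionable)
```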
Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David; Yang, Rong
2017-02-17
As one of the several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes that are generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. On the GUID server, no PII is permitted to be stored, and only GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. The goal of our study was to explore a method of checking questionable entry of PII in this context without using or sending any portion of PII while registering a subject. According to the principle of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to check PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm. These errors were placed in the required PII fields or optional PII fields. The performance of the proposed algorithm was also tested in the study subject registration system. There were 127,700 error-planted subjects, of which 114,464 (89.64%) could still be identified as the previously registered subject, and the remaining 13,236 (10.36%, 13,236/127,700) were treated as new subjects. As expected, 100% of nonidentified subjects had errors within the required PII fields. The probability that a subject is identified is related to the count and the type of incorrect PII field. For all identified subjects, their errors can be found by the proposed algorithm. The scope of questionable PII fields is also associated with the count and the type of the incorrect PII field. The best situation is to precisely find the exact incorrect PII fields, and the worst is to narrow the questionable scope to a set of 13 PII fields. In the application, the proposed algorithm can flag questionable PII entry and serve as an effective tool. The GUID system has high error tolerance and may correctly identify and associate a subject even with few PII field errors. Correct data entry, especially of required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, the questionable input of PII may be identified by applying set theory operators based on the hash codes. The count and the type of incorrect PII fields play an important role in identifying a subject and locating questionable PII fields. ©Xianlai Chen, Yang C Fann, Matthew McAuliffe, David Vismer, Rong Yang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 17.02.2017.
Reducing medication errors in critical care: a multimodal approach
Kruer, Rachel M; Jarrell, Andrew S; Latif, Asad
2014-01-01
The Institute of Medicine has reported that medication errors are the single most common type of error in health care, representing 19% of all adverse events, while accounting for over 7,000 deaths annually. The frequency of medication errors in adult intensive care units can be as high as 947 per 1,000 patient-days, with a median of 105.9 per 1,000 patient-days. The formulation of drugs is a potential contributor to medication errors. Challenges related to drug formulation are specific to the various routes of medication administration, though errors associated with medication appearance and labeling occur among all drug formulations and routes of administration. Addressing these multifaceted challenges requires a multimodal approach. Changes in technology, training, systems, and safety culture are all strategies to potentially reduce medication errors related to drug formulation in the intensive care unit. PMID:25210478
A system dynamics approach to analyze laboratory test errors.
Guo, Shijing; Roudsari, Abdul; Garcez, Artur d'Avila
2015-01-01
Although many studies have been carried out to analyze laboratory test errors during the last decade, a systemic view is still lacking, especially one that traces errors through the test process and evaluates potential interventions. This study applies system dynamics modeling to laboratory errors to trace the laboratory error flows and to simulate the system behaviors while changing internal variable values. The change of the variables may reflect a change in demand or a proposed intervention. A review of the literature on laboratory test errors was conducted and provided the main data source for the system dynamics model. Three "what if" scenarios were selected for testing the model. System behaviors were observed and compared under different scenarios over a period of time. The results suggest that system dynamics modeling has potential effectiveness in helping to understand laboratory errors, observe model behaviors, and provide risk-free simulation experiments for possible strategies.
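For illustration, a system dynamics view of error flow can be reduced to a few stocks (undetected, detected, and escaped errors) connected by rate equations and stepped forward in time. A minimal sketch; the stocks, flows, and parameter values below are assumptions, not the model described in the study:

```python
# Assumed stocks: undetected lab-test errors, detected errors, and errors that
# escape into reported results. Assumed flows: generation, detection, escape.
generation_rate = 50.0    # errors generated per day (assumed)
detection_fraction = 0.6  # fraction of undetected errors caught per day (assumed)
escape_fraction = 0.2     # fraction escaping to reported results per day (assumed)

undetected, detected, escaped = 0.0, 0.0, 0.0
dt = 0.1
for _ in range(int(30 / dt)):            # simulate 30 days with Euler steps
    detect = detection_fraction * undetected * dt
    escape = escape_fraction * undetected * dt
    undetected += generation_rate * dt - detect - escape
    detected += detect
    escaped += escape

print(round(undetected, 1), round(detected, 1), round(escaped, 1))
```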
Transient fault behavior in a microprocessor: A case study
NASA Technical Reports Server (NTRS)
Duba, Patrick
1989-01-01
An experimental analysis is described which studies the susceptibility of a microprocessor-based jet engine controller to upsets caused by current and voltage transients. A design automation environment which allows the run-time injection of transients and the tracing of their impact from the device level to the pin level is described. The resulting error data are categorized by the charge levels of the injected transients, by location, and by their potential to cause logic upsets, latched errors, and pin errors. The results show a 3-picocoulomb threshold, below which the transients have little impact. An Arithmetic and Logic Unit transient is most likely to result in logic upsets and pin errors (i.e., impact the external environment). The transients in the countdown unit are potentially serious since they can result in latched errors, thus causing latent faults. Suggestions to protect the processor against these errors, by incorporating internal error detection and transient suppression techniques, are also made.
Discrimination of plant-parasitic nematodes from complex soil communities using ecometagenetics.
Porazinska, Dorota L; Morgan, Matthew J; Gaspar, John M; Court, Leon N; Hardy, Christopher M; Hodda, Mike
2014-07-01
Many plant pathogens are microscopic, cryptic, and difficult to diagnose. The new approach of ecometagenetics, involving ultrasequencing, bioinformatics, and biostatistics, has the potential to improve diagnoses of plant pathogens such as nematodes from the complex mixtures found in many agricultural and biosecurity situations. We tested this approach on a gradient of complexity ranging from a few individuals from a few species of known nematode pathogens in a relatively defined substrate to a complex and poorly known suite of nematode pathogens in a complex forest soil, including its associated biota of unknown protists, fungi, and other microscopic eukaryotes. We added three known but contrasting species (Pratylenchus neglectus, the closely related P. thornei, and Heterodera avenae) to half the set of substrates, leaving the other half without them. We then tested whether all nematode pathogens (known and unknown, indigenous and experimentally added) were consistently detected as present or absent. We always detected the Pratylenchus spp. correctly and with the number of sequence reads proportional to the numbers added. However, a single cyst of H. avenae was only identified approximately half the time it was present. Other plant-parasitic nematodes and nematodes from other trophic groups were detected well, but other eukaryotes were detected less consistently. DNA sampling errors or informatic errors or both were involved in misidentification of H. avenae; however, the proportions of each varied in the different bioinformatic pipelines and with different parameters used. To a large extent, false-positive and false-negative errors were complementary: pipelines and parameters with the highest false-positive rates had the lowest false-negative rates and vice versa. Sources of error identified included assumptions in the bioinformatic pipelines, slight differences in primer regions, the number of sequence reads regarded as the minimum threshold for inclusion in analysis, and inaccessible DNA in resistant life stages. Identification of the sources of error allows us to suggest ways to improve identification using ecometagenetics.
Farwell, Lawrence A.; Richardson, Drew C.; Richardson, Graham M.; Furedy, John J.
2014-01-01
A classification concealed information test (CIT) used the “brain fingerprinting” method of applying P300 event-related potential (ERP) in detecting information that is (1) acquired in real life and (2) unique to US Navy experts in military medicine. Military medicine experts and non-experts were asked to push buttons in response to three types of text stimuli. Targets contain known information relevant to military medicine, are identified to subjects as relevant, and require pushing one button. Subjects are told to push another button to all other stimuli. Probes contain concealed information relevant to military medicine, and are not identified to subjects. Irrelevants contain equally plausible, but incorrect/irrelevant information. Error rate was 0%. Median and mean statistical confidences for individual determinations were 99.9% with no indeterminates (results lacking sufficiently high statistical confidence to be classified). We compared error rate and statistical confidence for determinations of both information present and information absent produced by classification CIT (Is a probe ERP more similar to a target or to an irrelevant ERP?) vs. comparison CIT (Does a probe produce a larger ERP than an irrelevant?) using P300 plus the late negative component (LNP; together, P300-MERMER). Comparison CIT produced a significantly higher error rate (20%) and lower statistical confidences: mean 67%; information-absent mean was 28.9%, less than chance (50%). We compared analysis using P300 alone with the P300 + LNP. P300 alone produced the same 0% error rate but significantly lower statistical confidences. These findings add to the evidence that the brain fingerprinting methods as described here provide sufficient conditions to produce less than 1% error rate and greater than 95% median statistical confidence in a CIT on information obtained in the course of real life that is characteristic of individuals with specific training, expertise, or organizational affiliation. PMID:25565941
On Statistical Modeling of Sequencing Noise in High Depth Data to Assess Tumor Evolution
NASA Astrophysics Data System (ADS)
Rabadan, Raul; Bhanot, Gyan; Marsilio, Sonia; Chiorazzi, Nicholas; Pasqualucci, Laura; Khiabanian, Hossein
2018-07-01
One cause of cancer mortality is tumor evolution to therapy-resistant disease. First line therapy often targets the dominant clone, and drug resistance can emerge from preexisting clones that gain fitness through therapy-induced natural selection. Such mutations may be identified using targeted sequencing assays by analysis of noise in high-depth data. Here, we develop a comprehensive, unbiased model for sequencing error background. We find that noise in sufficiently deep DNA sequencing data can be approximated by aggregating negative binomial distributions. Mutations with frequencies above noise may have prognostic value. We evaluate our model with simulated exponentially expanded populations as well as data from cell line and patient sample dilution experiments, demonstrating its utility in prognosticating tumor progression. Our results may have the potential to identify significant mutations that can cause recurrence. These results are relevant in the pretreatment clinical setting to determine appropriate therapy and prepare for potential recurrence pretreatment.
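One common way to flag a variant that sits above the sequencing-noise background is to fit an overdispersed count model to the error counts and compute a tail probability for the observed count. A minimal sketch using a method-of-moments negative binomial fit; the read counts, depth, and fitting method are assumptions for illustration rather than the authors' aggregated model:

```python
import numpy as np
from scipy.stats import nbinom

# Hypothetical non-reference read counts at error-only positions (depth ~10,000x).
background = np.array([1, 3, 0, 2, 9, 1, 5, 0, 12, 2, 1, 4, 15, 2, 3])
candidate_count = 22  # non-reference reads at the position of interest (assumed)

# Method-of-moments negative binomial fit: mean = n(1-p)/p, var = n(1-p)/p^2.
# Requires overdispersion (variance greater than mean).
mean, var = background.mean(), background.var(ddof=1)
p = mean / var
n = mean * p / (1 - p)

p_value = nbinom.sf(candidate_count - 1, n, p)  # P(X >= candidate_count)
print(f"tail p-value for the candidate mutation = {p_value:.3g}")
```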
A new simplex chemometric approach to identify olive oil blends with potentially high traceability.
Semmar, N; Laroussi-Mezghani, S; Grati-Kamoun, N; Hammami, M; Artaud, J
2016-10-01
Olive oil blends (OOBs) are complex matrices combining different cultivars at variable proportions. Although qualitative determination of OOBs has been addressed in several chemometric studies, quantitative evaluation of their contents remains poorly developed because of traceability difficulties concerning co-occurring cultivars. Around this question, we recently published an original simplex approach helping to develop predictive models of the proportions of co-occurring cultivars from chemical profiles of resulting blends (Semmar & Artaud, 2015). Beyond predictive model construction and validation, this paper presents an extension based on prediction error analysis to statistically define the blends with the highest predictability among all the possible ones that can be made by mixing cultivars at different proportions. This provides an interesting way to identify a priori labeled commercial products with potentially high traceability, taking into account the natural chemical variability of the different constitutive cultivars. Copyright © 2016 Elsevier Ltd. All rights reserved.
On Statistical Modeling of Sequencing Noise in High Depth Data to Assess Tumor Evolution
NASA Astrophysics Data System (ADS)
Rabadan, Raul; Bhanot, Gyan; Marsilio, Sonia; Chiorazzi, Nicholas; Pasqualucci, Laura; Khiabanian, Hossein
2017-12-01
One cause of cancer mortality is tumor evolution to therapy-resistant disease. First line therapy often targets the dominant clone, and drug resistance can emerge from preexisting clones that gain fitness through therapy-induced natural selection. Such mutations may be identified using targeted sequencing assays by analysis of noise in high-depth data. Here, we develop a comprehensive, unbiased model for sequencing error background. We find that noise in sufficiently deep DNA sequencing data can be approximated by aggregating negative binomial distributions. Mutations with frequencies above noise may have prognostic value. We evaluate our model with simulated exponentially expanded populations as well as data from cell line and patient sample dilution experiments, demonstrating its utility in prognosticating tumor progression. Our results may have the potential to identify significant mutations that can cause recurrence. These results are relevant in the pretreatment clinical setting to determine appropriate therapy and prepare for potential recurrence pretreatment.
42 CFR 431.960 - Types of payment errors.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...
42 CFR 431.960 - Types of payment errors.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...
42 CFR 431.960 - Types of payment errors.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...
42 CFR 431.960 - Types of payment errors.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Estimating Improper Payments in Medicaid and CHIP § 431.960 Types of payment errors. (a) General rule. State or provider errors identified for the Medicaid and CHIP improper payments measurement under the... been paid by a third party but were inappropriately paid by Medicaid or CHIP. (v) Pricing errors. (vi...
Purification of Logic-Qubit Entanglement
Zhou, Lan; Sheng, Yu-Bo
2016-01-01
Recently, logic-qubit entanglement has shown potential applications in future quantum communication and quantum networks. However, the entanglement will suffer from noise and decoherence. In this paper, we investigate the first entanglement purification protocol for logic-qubit entanglement. We show that both the bit-flip error and the phase-flip error in logic-qubit entanglement can be well purified. Moreover, the bit-flip error in physical-qubit entanglement can be completely corrected. The phase-flip error in physical-qubit entanglement is equivalent to the bit-flip error in logic-qubit entanglement, which can also be purified. This entanglement purification protocol may provide some potential applications in future quantum communication and quantum networks. PMID:27377165
High altitude cognitive performance and COPD interaction
Kourtidou-Papadeli, C; Papadelis, C; Koutsonikolas, D; Boutzioukas, S; Styliadis, C; Guiba-Tziampiri, O
2008-01-01
Introduction: Thousands of people work and perform every day in high-altitude environments, whether as pilots, shift workers, or mountaineers. The problem is that most of the accidents in this environment have been attributed to human error. The objective of this study was to assess complex cognitive performance as it interacts with respiratory insufficiency at altitudes of 8000 feet and identify the potential effect of hypoxia on safe performance. Methods: Twenty subjects participated in the study, divided into two groups: Group I with mild asymptomatic chronic obstructive pulmonary disease (COPD), and Group II with normal respiratory function. Altitude was simulated at 8000 ft using gas mixtures. Results: Individuals with mild COPD experienced notable hypoxemia with significant performance decrements and an increased number of errors at cabin altitude, compared to normal subjects, whereas their blood pressure significantly increased. PMID:19048098
Quantum Cryptography in Existing Telecommunications Infrastructure
NASA Astrophysics Data System (ADS)
Rogers, Daniel; Bienfang, Joshua; Mink, Alan; Hershman, Barry; Nakassis, Anastase; Tang, Xiao; Ma, Lijun; Su, David; Williams, Carl; Clark, Charles
2006-03-01
Quantum cryptography has shown the potential for ultra-secure communications. However, all systems demonstrated to date operate at speeds that make them impractical for performing continuous one-time-pad encryption of today's broadband communications. By adapting clock and data recovery techniques from modern telecommunications engineering practice, and by designing and implementing expeditious error correction and privacy amplification algorithms, we have demonstrated error-corrected and privacy-amplified key rates up to 1.0 Mbps over a free-space link with a 1.25 Gbps clock. Using new detectors with improved timing resolution, careful wavelength selection and an increased clock speed, we expect to quadruple the transmission rate over a 1.5 km free-space link. We have identified scalable solutions for delivering sustained one-time-pad encryption at 10 Mbps, thus making it possible to integrate quantum cryptography with first-generation Ethernet protocols.
Quantum Error Correction for Metrology
NASA Astrophysics Data System (ADS)
Sushkov, Alex; Kessler, Eric; Lovchinsky, Igor; Lukin, Mikhail
2014-05-01
The question of the best achievable sensitivity in a quantum measurement is of great experimental relevance, and has seen a lot of attention in recent years. Recent studies [e.g., Nat. Phys. 7, 406 (2011), Nat. Comms. 3, 1063 (2012)] suggest that in most generic scenarios any potential quantum gain (e.g. through the use of entangled states) vanishes in the presence of environmental noise. To overcome these limitations, we propose and analyze a new approach to improve quantum metrology based on quantum error correction (QEC). We identify the conditions under which QEC allows one to improve the signal-to-noise ratio in quantum-limited measurements, and we demonstrate that it enables, in certain situations, Heisenberg-limited sensitivity. We discuss specific applications to nanoscale sensing using nitrogen-vacancy centers in diamond in which QEC can significantly improve the measurement sensitivity and bandwidth under realistic experimental conditions.
Alterations in Error-Related Brain Activity and Post-Error Behavior over Time
ERIC Educational Resources Information Center
Themanson, Jason R.; Rosen, Peter J.; Pontifex, Matthew B.; Hillman, Charles H.; McAuley, Edward
2012-01-01
This study examines the relation between the error-related negativity (ERN) and post-error behavior over time in healthy young adults (N = 61). Event-related brain potentials were collected during two sessions of an identical flanker task. Results indicated changes in ERN and post-error accuracy were related across task sessions, with more…
A cognitive taxonomy of medical errors.
Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H
2004-06-01
Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of medical error reporting systems that not only categorize errors but also identify problems and help to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.
Error simulation of paired-comparison-based scaling methods
NASA Astrophysics Data System (ADS)
Cui, Chengwu
2000-12-01
Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with randomly introduced proportions of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
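The simulation idea can be sketched with a Thurstone Case V-style analysis: perturb each pairwise choice proportion with binomial sampling error, convert the proportions to z-scores, average them into scale values, and measure the spread of those values over many replications. A minimal sketch; the true scale values, number of stimuli, and sampling size are assumptions for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
true_scale = np.array([0.0, 0.3, 0.7, 1.2, 1.6])  # assumed scale values of 5 stimuli
n_judges = 30                                      # assumed sampling size
n_sims = 2000
m = len(true_scale)

scaled = []
for _ in range(n_sims):
    p_obs = np.full((m, m), 0.5)
    for i in range(m):
        for j in range(i + 1, m):
            # Binomially sampled proportion of judges choosing stimulus i over j.
            p_ij = norm.cdf(true_scale[i] - true_scale[j])
            p_obs[i, j] = rng.binomial(n_judges, p_ij) / n_judges
            p_obs[j, i] = 1.0 - p_obs[i, j]
    z = norm.ppf(np.clip(p_obs, 0.01, 0.99))   # clip to avoid infinite z-scores
    scaled.append(z.mean(axis=1))               # Thurstone Case V scale values

print("average SD of scaled values:", round(float(np.std(scaled, axis=0).mean()), 3))
```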
Huang, Xinchuan; Schwenke, David W; Lee, Timothy J
2011-01-28
In this work, we build upon our previous work on the theoretical spectroscopy of ammonia, NH(3). Compared to our 2008 study, we include more physics in our rovibrational calculations and more experimental data in the refinement procedure, and these enable us to produce a potential energy surface (PES) of unprecedented accuracy. We call this the HSL-2 PES. The additional physics we include is a second-order correction for the breakdown of the Born-Oppenheimer approximation, and we find it to be critical for improved results. By including experimental data for higher rotational levels in the refinement procedure, we were able to greatly reduce our systematic errors for the rotational dependence of our predictions. These additions together lead to a significantly improved total angular momentum (J) dependence in our computed rovibrational energies. The root-mean-square error between our predictions using the HSL-2 PES and the reliable energy levels from the HITRAN database for J = 0-6 and J = 7∕8 for (14)NH(3) is only 0.015 cm(-1) and 0.020∕0.023 cm(-1), respectively. The root-mean-square errors for the characteristic inversion splittings are approximately 1∕3 smaller than those for energy levels. The root-mean-square error for the 6002 J = 0-8 transition energies is 0.020 cm(-1). Overall, for J = 0-8, the spectroscopic data computed with HSL-2 is roughly an order of magnitude more accurate relative to our previous best ammonia PES (denoted HSL-1). These impressive numbers are eclipsed only by the root-mean-square error between our predictions for purely rotational transition energies of (15)NH(3) and the highly accurate Cologne database (CDMS): 0.00034 cm(-1) (10 MHz), in other words, 2 orders of magnitude smaller. In addition, we identify a deficiency in the (15)NH(3) energy levels determined from a model of the experimental data.
O'Connor, Annette M; Totton, Sarah C; Cullen, Jonah N; Ramezani, Mahmood; Kalivarapu, Vijay; Yuan, Chaohui; Gilbert, Stephen B
2018-01-01
Systematic reviews are increasingly using data from preclinical animal experiments in evidence networks. Further, there are ever-increasing efforts to automate aspects of the systematic review process. When assessing systematic bias and unit-of-analysis errors in preclinical experiments, it is critical to understand the study design elements employed by investigators. Such information can also inform prioritization of automation efforts that allow the identification of the most common issues. The aim of this study was to identify the design elements used by investigators in preclinical research in order to inform unique aspects of assessment of bias and error in preclinical research. Using 100 preclinical experiments each related to brain trauma and toxicology, we assessed design elements described by the investigators. We evaluated Methods and Materials sections of reports for descriptions of the following design elements: 1) use of comparison group, 2) unit of allocation of the interventions to study units, 3) arrangement of factors, 4) method of factor allocation to study units, 5) concealment of the factors during allocation and outcome assessment, 6) independence of study units, and 7) nature of factors. Many investigators reported using design elements that suggested the potential for unit-of-analysis errors, i.e., descriptions of repeated measurements of the outcome (94/200) and descriptions of potential for pseudo-replication (99/200). Use of complex factor arrangements was common, with 112 experiments using some form of factorial design (complete, incomplete or split-plot-like). In the toxicology dataset, 20 of the 100 experiments appeared to use a split-plot-like design, although no investigators used this term. The common use of repeated measures and factorial designs means understanding bias and error in preclinical experimental design might require greater expertise than simple parallel designs. Similarly, use of complex factor arrangements creates novel challenges for accurate automation of data extraction and bias and error assessment in preclinical experiments.
Schoenberg, Mike R; Osborn, Katie E; Mahone, E Mark; Feigon, Maia; Roth, Robert M; Pliskin, Neil H
2017-11-08
Errors in communication are a leading cause of medical errors. A potential source of error in communicating neuropsychological results is confusion in the qualitative descriptors used to describe standardized neuropsychological data. This study sought to evaluate the extent to which medical consumers of neuropsychological assessments believed that results/findings were not clearly communicated. In addition, preference data for a variety of qualitative descriptors commonly used to communicate normative neuropsychological test scores were obtained. Preference data were obtained for five qualitative descriptor systems as part of a larger 36-item internet-based survey of physician satisfaction with neuropsychological services. A new qualitative descriptor system termed the Simplified Qualitative Classification System (Q-Simple) was proposed to reduce the potential for communication errors using seven terms: very superior, superior, high average, average, low average, borderline, and abnormal/impaired. A non-random convenience sample of 605 clinicians identified from four United States academic medical centers from January 1, 2015 through January 7, 2016 was invited to participate. A total of 182 surveys were completed. A minority of clinicians (12.5%) indicated that neuropsychological study results were not clearly communicated. When communicating neuropsychological standardized scores, the two most preferred qualitative descriptor systems were that of Heaton and colleagues (Comprehensive norms for an extended Halstead-Reitan battery: Demographic corrections, research findings, and clinical applications. Odessa, TX: Psychological Assessment Resources) (26%) and the newly proposed Q-Simple system (22%). Initial findings highlight the need to improve and standardize communication of neuropsychological results. These data offer initial guidance for preferred terms to communicate test results and form a foundation for more standardized practice among neuropsychologists. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Nazemi, S Majid; Amini, Morteza; Kontulainen, Saija A; Milner, Jaques S; Holdsworth, David W; Masri, Bassam A; Wilson, David R; Johnston, James D
2017-01-01
Quantitative computed tomography-based subject-specific finite element modeling has potential to clarify the role of subchondral bone alterations in knee osteoarthritis initiation, progression, and pain. However, it is unclear what density-modulus equation(s) should be applied with subchondral cortical and subchondral trabecular bone when constructing finite element models of the tibia. Using a novel approach applying neural networks, optimization, and back-calculation against in situ experimental testing results, the objective of this study was to identify subchondral-specific equations that optimized finite element predictions of local structural stiffness at the proximal tibial subchondral surface. Thirteen proximal tibial compartments were imaged via quantitative computed tomography. Imaged bone mineral density was converted to elastic moduli using multiple density-modulus equations (93 total variations) then mapped to corresponding finite element models. For each variation, root mean squared error was calculated between finite element prediction and in situ measured stiffness at 47 indentation sites. Resulting errors were used to train an artificial neural network, which provided an unlimited number of model variations, with corresponding error, for predicting stiffness at the subchondral bone surface. Nelder-Mead optimization was used to identify optimum density-modulus equations for predicting stiffness. Finite element modeling predicted 81% of experimental stiffness variance (with 10.5% error) using optimized equations for subchondral cortical and trabecular bone differentiated at a density of 0.5 g/cm³. In comparison with published density-modulus relationships, optimized equations offered improved predictions of local subchondral structural stiffness. Further research is needed with anisotropy inclusion, a smaller voxel size and de-blurring algorithms to improve predictions. Copyright © 2016 Elsevier Ltd. All rights reserved.
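The back-calculation step can be sketched as a derivative-free search over the coefficients of assumed power-law density-modulus equations, minimizing the error between predicted and measured stiffness. A minimal sketch using scipy's Nelder-Mead with a stand-in error function in place of the trained neural-network surrogate; all functional forms, parameter names, and values are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def rms_error(params):
    """Stand-in for the neural-network surrogate: RMS error between predicted
    and measured stiffness as a function of assumed density-modulus coefficients
    (a_cort, b_cort, a_trab, b_trab in E = a * density**b)."""
    # Purely illustrative quadratic bowl with a known minimum; the real study
    # evaluated finite element predictions against indentation measurements.
    target = np.array([10.0, 2.0, 6.0, 1.5])
    return float(np.sqrt(np.mean((np.array(params) - target) ** 2)))

result = minimize(rms_error, x0=[8.0, 1.8, 5.0, 1.2], method="Nelder-Mead")
print("optimized density-modulus coefficients:", np.round(result.x, 2))
print("surrogate RMS error at optimum:", round(result.fun, 4))
```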
Franklin, Bryony Dean; Reynolds, Matthew; Sadler, Stacey; Hibberd, Ralph; Avery, Anthony J; Armstrong, Sarah J; Mehta, Rajnikant; Boyd, Matthew J; Barber, Nick
2014-01-01
Objectives To compare prevalence and types of dispensing errors and pharmacists’ labelling enhancements, for prescriptions transmitted electronically versus paper prescriptions. Design Naturalistic stepped wedge study. Setting 15 English community pharmacies. Intervention Electronic transmission of prescriptions between prescriber and pharmacy. Main outcome measures Prevalence of labelling errors, content errors and labelling enhancements (beneficial additions to the instructions), as identified by researchers visiting each pharmacy. Results Overall, we identified labelling errors in 5.4% of 16 357 dispensed items, and content errors in 1.4%; enhancements were made for 13.6%. Pharmacists also edited the label for a further 21.9% of electronically transmitted items. Electronically transmitted prescriptions had a higher prevalence of labelling errors (7.4% of 3733 items) than other prescriptions (4.8% of 12 624); OR 1.46 (95% CI 1.21 to 1.76). There was no difference for content errors or enhancements. The increase in labelling errors was mainly accounted for by errors (mainly at one pharmacy) involving omission of the indication, where specified by the prescriber, from the label. A sensitivity analysis in which these cases (n=158) were not considered errors revealed no remaining difference between prescription types. Conclusions We identified a higher prevalence of labelling errors for items transmitted electronically, but this was predominantly accounted for by local practice in a single pharmacy, independent of prescription type. Community pharmacists made labelling enhancements to about one in seven dispensed items, whether electronically transmitted or not. Community pharmacists, prescribers, professional bodies and software providers should work together to agree how items should be dispensed and labelled to best reap the benefits of electronically transmitted prescriptions. Community pharmacists need to ensure their computer systems are promptly updated to help reduce errors. PMID:24742778
Franklin, Bryony Dean; Reynolds, Matthew; Sadler, Stacey; Hibberd, Ralph; Avery, Anthony J; Armstrong, Sarah J; Mehta, Rajnikant; Boyd, Matthew J; Barber, Nick
2014-08-01
To compare prevalence and types of dispensing errors and pharmacists' labelling enhancements, for prescriptions transmitted electronically versus paper prescriptions. Naturalistic stepped wedge study. 15 English community pharmacies. Electronic transmission of prescriptions between prescriber and pharmacy. Prevalence of labelling errors, content errors and labelling enhancements (beneficial additions to the instructions), as identified by researchers visiting each pharmacy. Overall, we identified labelling errors in 5.4% of 16,357 dispensed items, and content errors in 1.4%; enhancements were made for 13.6%. Pharmacists also edited the label for a further 21.9% of electronically transmitted items. Electronically transmitted prescriptions had a higher prevalence of labelling errors (7.4% of 3733 items) than other prescriptions (4.8% of 12,624); OR 1.46 (95% CI 1.21 to 1.76). There was no difference for content errors or enhancements. The increase in labelling errors was mainly accounted for by errors (mainly at one pharmacy) involving omission of the indication, where specified by the prescriber, from the label. A sensitivity analysis in which these cases (n=158) were not considered errors revealed no remaining difference between prescription types. We identified a higher prevalence of labelling errors for items transmitted electronically, but this was predominantly accounted for by local practice in a single pharmacy, independent of prescription type. Community pharmacists made labelling enhancements to about one in seven dispensed items, whether electronically transmitted or not. Community pharmacists, prescribers, professional bodies and software providers should work together to agree how items should be dispensed and labelled to best reap the benefits of electronically transmitted prescriptions. Community pharmacists need to ensure their computer systems are promptly updated to help reduce errors. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.
Dave, Jaydev K; Gingold, Eric L
2013-01-01
The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract from these files information from the structured elements in the DICOM metadata relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
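A minimal sketch of extracting numeric dose items from the structured elements of DICOM dose-report metadata, here using Python and pydicom rather than the Matlab implementation described above; the file name is hypothetical and the traversal is simplified.

```python
# Hedged sketch: walk a CT Radiation Dose Structured Report and collect numeric
# content items (e.g. DLP, CTDIvol). This is an illustrative pydicom version,
# not the study's Matlab code; the input path is hypothetical.
import pydicom

def walk_content(items, results):
    for item in items:
        name = ""
        if "ConceptNameCodeSequence" in item:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
        if item.get("ValueType") == "NUM" and "MeasuredValueSequence" in item:
            mv = item.MeasuredValueSequence[0]
            results.append((name, float(mv.NumericValue)))
        if "ContentSequence" in item:          # recurse into nested containers
            walk_content(item.ContentSequence, results)

ds = pydicom.dcmread("ct_dose_report.dcm")     # hypothetical RDSR file
values = []
walk_content(ds.ContentSequence, values)
for name, value in values:
    print(name, value)
```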
2017-01-01
Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both under simulated conditions and real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package. PMID:28100584
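A minimal sketch of the network-based idea for handling UMI sequencing errors, assuming the commonly described directional criterion (absorb a UMI into a more abundant one when they differ at a single position and the abundant UMI has at least 2n-1 counts); the counts are invented and the published UMI-tools implementation is more complete.

```python
# Hedged sketch of directional UMI collapsing: link a high-count "parent" UMI
# to low-count "children" within one mismatch, then keep only the parents.
# Thresholds and data are illustrative; see the UMI-tools package for the
# published method.
from collections import Counter

def hamming1(a, b):
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

def collapse_umis(counts: Counter):
    umis = sorted(counts, key=counts.get, reverse=True)
    absorbed = set()
    for i, parent in enumerate(umis):
        if parent in absorbed:
            continue
        for child in umis[i + 1:]:
            if child in absorbed:
                continue
            # directional criterion used by network-based deduplication
            if hamming1(parent, child) and counts[parent] >= 2 * counts[child] - 1:
                absorbed.add(child)
    return [u for u in umis if u not in absorbed]

counts = Counter({"ATTG": 456, "ATTA": 3, "TTTG": 2, "CCGG": 120})
print(collapse_umis(counts))   # expected: ['ATTG', 'CCGG']
```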
Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses.
Baldwin, Abigail; Rodriguez, Elizabeth S
2016-02-01
The prevalence of medication errors associated with chemotherapy administration is not precisely known. Little evidence exists concerning the extent or nature of errors; however, some evidence demonstrates that errors are related to prescribing. This article demonstrates how the review of chemotherapy orders by a designated nurse known as a verification nurse (VN) at a National Cancer Institute-designated comprehensive cancer center helps to identify prescribing errors that may prevent chemotherapy administration mistakes and improve patient safety in outpatient infusion units. This article will describe the role of the VN and details of the verification process. To identify benefits of the VN role, a retrospective review and analysis of chemotherapy near-miss events from 2009-2014 was performed. A total of 4,282 events related to chemotherapy were entered into the Reporting to Improve Safety and Quality system. A majority of the events were categorized as near-miss events, or those that, because of chance, did not result in patient injury, and were identified at the point of prescribing.
Sahay, Ashlyn; Hutchinson, Marie; East, Leah
2015-05-01
Despite the growing awareness of the benefits of positive workplace climates, unsupportive and disruptive workplace behaviours are widespread in health care organisations. Recent graduate nurses, who are often new to a workplace, are particularly vulnerable in unsupportive climates, and are also recognised to be at higher risk for medication errors. Investigate the association between workplace supports and relationships and safe medication practice among graduate nurses. Exploratory study using quantitative survey with a convenience sample of 58 nursing graduates in two Australian States. Online survey focused on graduates' self-reported medication errors, safe medication practice and the nature of workplace supports and relationships. Spearman's correlations identified that unsupportive workplace relationships were inversely related to graduate nurse medication errors and erosion of safe medication practices, while supportive Nurse Unit Manager and supportive work team relationships positively influenced safe medication practice among graduates. Workplace supports and relationships are potentially both the cause and solution to graduate nurse medication errors and safe medication practices. The findings develop further understanding about the impact of unsupportive and disruptive behaviours on patient safety and draw attention to the importance of undergraduate and continuing education strategies that promote positive workplace behaviours and graduate resilience. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optical storage media data integrity studies
NASA Technical Reports Server (NTRS)
Podio, Fernando L.
1994-01-01
Optical disk-based information systems are being used in private industry and many Federal Government agencies for on-line and long-term storage of large quantities of data. The storage devices that are part of these systems are designed with powerful, but not unlimited, media error correction capacities. The integrity of data stored on optical disks does not only depend on the life expectancy specifications for the medium. Different factors, including handling and storage conditions, may result in an increase in the size and frequency of medium errors. Monitoring the potential data degradation is crucial, especially for long-term applications. Efforts are being made by the Association for Information and Image Management Technical Committee C21, Storage Devices and Applications, to specify methods for monitoring and reporting to the user medium errors detected by the storage device while writing, reading or verifying the data stored in that medium. The Computer Systems Laboratory (CSL) of the National Institute of Standards and Technology (NIST) has a leadership role in the development of these standard techniques. In addition, CSL is researching other data integrity issues, including the investigation of error-resilient compression algorithms. NIST has conducted care and handling experiments on optical disk media with the objective of identifying possible causes of degradation. NIST work in data integrity and related standards activities is described.
Auditing the Assignments of Top-Level Semantic Types in the UMLS Semantic Network to UMLS Concepts
He, Zhe; Perl, Yehoshua; Elhanan, Gai; Chen, Yan; Geller, James; Bian, Jiang
2018-01-01
The Unified Medical Language System (UMLS) is an important terminological system. By the policy of its curators, each concept of the UMLS should be assigned the most specific Semantic Types (STs) in the UMLS Semantic Network (SN). Hence, the Semantic Types of most UMLS concepts are assigned at or near the bottom (leaves) of the UMLS Semantic Network. While most ST assignments are correct, some errors do occur. Therefore, Quality Assurance efforts of UMLS curators for ST assignments should concentrate on automatically detected sets of UMLS concepts with higher error rates than random sets. In this paper, we investigate the assignments of top-level semantic types in the UMLS semantic network to concepts, identify potential erroneous assignments, define four categories of errors, and thus provide assistance to curators of the UMLS to avoid these assignment errors. Human experts analyzed samples of concepts assigned 10 of the top-level semantic types and categorized the erroneous ST assignments into these four logical categories. Two thirds of the concepts assigned these 10 top-level semantic types are erroneous. Our results demonstrate that reviewing top-level semantic type assignments to concepts provides an effective way for UMLS quality assurance, compared with reviewing a random selection of semantic type assignments. PMID:29375930
NASA Astrophysics Data System (ADS)
Kakkos, I.; Gkiatis, K.; Bromis, K.; Asvestas, P. A.; Karanasiou, I. S.; Ventouras, E. M.; Matsopoulos, G. K.
2017-11-01
The detection of an error is the cognitive evaluation of an action outcome that is considered undesired or mismatches an expected response. Brain activity during monitoring of correct and incorrect responses elicits Event Related Potentials (ERPs) revealing complex cerebral responses to deviant sensory stimuli. Development of accurate error detection systems is of great importance both concerning practical applications and in investigating the complex neural mechanisms of decision making. In this study, data are used from an audio identification experiment that was implemented with two levels of complexity in order to investigate neurophysiological error processing mechanisms in actors and observers. To examine and analyse the variations of the processing of erroneous sensory information for each level of complexity we employ Support Vector Machines (SVM) classifiers with various learning methods and kernels using characteristic ERP time-windowed features. For dimensionality reduction and to remove redundant features we implement a feature selection framework based on Sequential Forward Selection (SFS). The proposed method provided high accuracy in identifying correct and incorrect responses both for actors and for observers with mean accuracy of 93% and 91% respectively. Additionally, computational time was reduced and the effects of the nesting problem usually occurring in SFS of large feature sets were alleviated.
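A minimal sketch of the classification pipeline described above, assuming scikit-learn's SVC and SequentialFeatureSelector as stand-ins for the study's SVM and SFS implementations; the random feature matrix only illustrates the shape of the problem, not ERP data.

```python
# Hedged sketch: SVM classification of correct vs. error trials from windowed
# ERP features, with sequential forward selection for dimensionality reduction.
# Kernel, feature counts and data are illustrative, not the authors' settings.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))            # 200 trials x 40 time-window features
y = rng.integers(0, 2, size=200)          # 0 = correct response, 1 = error

svm = SVC(kernel="rbf", C=1.0)
sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                direction="forward", cv=5)
model = make_pipeline(StandardScaler(), sfs, svm)
scores = cross_val_score(model, X, y, cv=5)
print("mean accuracy:", scores.mean())
```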
Auditing the Assignments of Top-Level Semantic Types in the UMLS Semantic Network to UMLS Concepts.
He, Zhe; Perl, Yehoshua; Elhanan, Gai; Chen, Yan; Geller, James; Bian, Jiang
2017-11-01
The Unified Medical Language System (UMLS) is an important terminological system. By the policy of its curators, each concept of the UMLS should be assigned the most specific Semantic Types (STs) in the UMLS Semantic Network (SN). Hence, the Semantic Types of most UMLS concepts are assigned at or near the bottom (leaves) of the UMLS Semantic Network. While most ST assignments are correct, some errors do occur. Therefore, Quality Assurance efforts of UMLS curators for ST assignments should concentrate on automatically detected sets of UMLS concepts with higher error rates than random sets. In this paper, we investigate the assignments of top-level semantic types in the UMLS semantic network to concepts, identify potential erroneous assignments, define four categories of errors, and thus provide assistance to curators of the UMLS to avoid these assignment errors. Human experts analyzed samples of concepts assigned 10 of the top-level semantic types and categorized the erroneous ST assignments into these four logical categories. Two thirds of the concepts assigned these 10 top-level semantic types are erroneous. Our results demonstrate that reviewing top-level semantic type assignments to concepts provides an effective way for UMLS quality assurance, compared with reviewing a random selection of semantic type assignments.
Abnormal Error Monitoring in Math-Anxious Individuals: Evidence from Error-Related Brain Potentials
Suárez-Pellicioni, Macarena; Núñez-Peña, María Isabel; Colomé, Àngels
2013-01-01
This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants’ math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN. PMID:24236212
Madani, Amin; Watanabe, Yusuke; Feldman, Liane S; Vassiliou, Melina C; Barkun, Jeffrey S; Fried, Gerald M; Aggarwal, Rajesh
2015-11-01
Bile duct injuries from laparoscopic cholecystectomy remain a significant source of morbidity and are often the result of intraoperative errors in perception, judgment, and decision-making. This qualitative study aimed to define and characterize higher-order cognitive competencies required to safely perform a laparoscopic cholecystectomy. Hierarchical and cognitive task analyses for establishing a critical view of safety during laparoscopic cholecystectomy were performed using qualitative methods to map the thoughts and practices that characterize expert performance. Experts with more than 5 years of experience, and who have performed at least 100 laparoscopic cholecystectomies, participated in semi-structured interviews and field observations. Verbal data were transcribed verbatim, supplemented with content from published literature, coded, thematically analyzed using grounded-theory by 2 independent reviewers, and synthesized into a list of items. A conceptual framework was created based on 10 interviews with experts, 9 procedures, and 18 literary sources. Experts included 6 minimally invasive surgeons, 2 hepato-pancreatico-biliary surgeons, and 2 acute care general surgeons (median years in practice, 11 [range 8 to 14]). One hundred eight cognitive elements (35 [32%] related to situation awareness, 47 [44%] involving decision-making, and 26 [24%] action-oriented subtasks) and 75 potential errors were identified and categorized into 6 general themes and 14 procedural tasks. Of the 75 potential errors, root causes were mapped to errors in situation awareness (24 [32%]), decision-making (49 [65%]), or either one (61 [81%]). This study defines the competencies that are essential to establishing a critical view of safety and avoiding bile duct injuries during laparoscopic cholecystectomy. This framework may serve as the basis for instructional design, assessment tools, and quality-control metrics to prevent injuries and promote a culture of patient safety. Copyright © 2015 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
de Wet, C; Bowie, P
2009-04-01
A multi-method strategy has been proposed to understand and improve the safety of primary care. The trigger tool is a relatively new method that has shown promise in American and secondary healthcare settings. It involves the focused review of a random sample of patient records using a series of "triggers" that alert reviewers to potential errors and previously undetected adverse events. To develop and test a global trigger tool to detect errors and adverse events in primary-care records. Trigger tool development was informed by previous research and content validated by expert opinion. The tool was applied by trained reviewers who worked in pairs to conduct focused audits of 100 randomly selected electronic patient records in each of five urban general practices in central Scotland. Review of 500 records revealed 2251 consultations and 730 triggers. An adverse event was found in 47 records (9.4%), indicating that harm occurred at a rate of one event per 48 consultations. Of these, 27 were judged to be preventable (42%). A further 17 records (3.4%) contained evidence of a potential adverse event. Harm severity was low to moderate for most patients (82.9%). Error and harm rates were higher in those aged ≥60 years, and most were medication-related (59%). The trigger tool was successful in identifying undetected patient harm in primary-care records and may be the most reliable method for achieving this. However, the feasibility of its routine application is open to question. The tool may have greater utility as a research rather than an audit technique. Further testing in larger, representative study samples is required.
Competition between learned reward and error outcome predictions in anterior cingulate cortex.
Alexander, William H; Brown, Joshua W
2010-02-15
The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.
Application of screened Coulomb potential in fitting DBV star PG 0112+104
NASA Astrophysics Data System (ADS)
Chen, Y. H.
2018-03-01
With 78.7 d of observations for PG 0112+104, a pulsating DB star, from Campaign 8 of the Kepler 2 mission, Hermes et al. made a detailed mode identification. A reliable set of modes was identified: 5 l = 1 modes, 3 l = 2 modes, and 3 modes of l = 1 or 2. Grids of DBV star models are evolved by WDEC with the element diffusion effect computed using either a pure Coulomb potential or a screened Coulomb potential. Fitting the identified modes of PG 0112+104 with the calculated ones, we studied how the element diffusion effect differs between the pure and screened Coulomb potentials. Our aim is to reduce the fitting error by studying new input physics. The starting models, including their chemical composition profiles, are from white dwarf models evolved by MESA. They were calculated following the stellar evolution from the main sequence to the start of the white dwarf cooling sequences. The optimal parameters are basically consistent with those of previous spectroscopic and asteroseismological studies. The pure and screened Coulomb potentials lead to different composition profiles in the C/O-He interface area. High-k modes are very sensitive to this area; however, most of the observed modes for PG 0112+104 are low-k modes. The σRMS obtained with the screened Coulomb potential is 4 per cent smaller than that obtained with the pure Coulomb potential when fitting the identified low-k modes of PG 0112+104. Fitting the Kepler 2 data with our models improved the σRMS of the fit by 27 per cent.
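For reference, the σRMS used to score each model fit is simply the root-mean-square of the period residuals; a short sketch with placeholder periods (not the PG 0112+104 values) is shown below.

```python
# Hedged sketch of the root-mean-square residual between observed and model
# pulsation periods; the period values are placeholders for illustration only.
import numpy as np

def sigma_rms(observed_periods, model_periods):
    d = np.asarray(observed_periods) - np.asarray(model_periods)
    return float(np.sqrt(np.mean(d ** 2)))

obs = [197.7, 168.3, 150.6]      # seconds, illustrative only
mod = [198.1, 167.8, 151.2]
print(sigma_rms(obs, mod))
```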
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almasi, Gheorghe; Blumrich, Matthias Augustin; Chen, Dong
Methods and apparatus perform fault isolation in multiple node computing systems using commutative error detection values (for example, checksums) to identify and isolate faulty nodes. When information associated with a reproducible portion of a computer program is injected into a network by a node, a commutative error detection value is calculated. At intervals, node fault detection apparatus associated with the multiple node computer system retrieves commutative error detection values associated with the node and stores them in memory. When the computer program is executed again by the multiple node computer system, new commutative error detection values are created and stored in memory. The node fault detection apparatus identifies faulty nodes by comparing commutative error detection values associated with reproducible portions of the application program generated by a particular node from different runs of the application program. Differences in values indicate a possible faulty node.
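A minimal sketch of the commutative error-detection idea, assuming XOR over hashed packets as the order-independent value; node names and packet contents are invented for illustration, not taken from the patented apparatus.

```python
# Hedged sketch: each node accumulates an order-independent checksum over the
# packets it injects for a reproducible program section; checksums from two
# runs are compared to flag candidate faulty nodes.
from functools import reduce

def commutative_checksum(packets):
    # XOR is commutative and associative, so packet arrival order does not matter
    return reduce(lambda acc, p: acc ^ hash(p), packets, 0)

run1 = {"node0": ["a", "b", "c"], "node1": ["x", "y"], "node2": ["m", "n"]}
run2 = {"node0": ["c", "a", "b"], "node1": ["x", "y"], "node2": ["m", "N"]}  # node2 differs

suspect = [n for n in run1
           if commutative_checksum(run1[n]) != commutative_checksum(run2[n])]
print("possible faulty nodes:", suspect)   # expected: ['node2']
```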
Error Analysis in Mathematics. Technical Report #1012
ERIC Educational Resources Information Center
Lai, Cheng-Fei
2012-01-01
Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…
Error Detection/Correction in Collaborative Writing
ERIC Educational Resources Information Center
Pilotti, Maura; Chodorow, Martin
2009-01-01
In the present study, we examined error detection/correction during collaborative writing. Subjects were asked to identify and correct errors in two contexts: a passage written by the subject (familiar text) and a passage written by a person other than the subject (unfamiliar text). A computer program inserted errors in function words prior to the…
Perceptual Bias in Speech Error Data Collection: Insights from Spanish Speech Errors
ERIC Educational Resources Information Center
Perez, Elvira; Santiago, Julio; Palma, Alfonso; O'Seaghdha, Padraig G.
2007-01-01
This paper studies the reliability and validity of naturalistic speech errors as a tool for language production research. Possible biases when collecting naturalistic speech errors are identified and specific predictions derived. These patterns are then contrasted with published reports from Germanic languages (English, German and Dutch) and one…
ERIC Educational Resources Information Center
Tulis, Maria; Steuer, Gabriele; Dresel, Markus
2018-01-01
Research on learning from errors gives reason to assume that errors provide a high potential to facilitate deep learning if students are willing and able to take these learning opportunities. The first aim of this study was to analyse whether beliefs about errors as learning opportunities can be theoretically and empirically distinguished from…
Lost in Translation: the Case for Integrated Testing
NASA Technical Reports Server (NTRS)
Young, Aaron
2017-01-01
The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industries to reduce the chances for critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process, and in many cases human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation will outline the unique benefits of integrated testing in catching critical flight errors that can otherwise go undetected, discuss HRA methods used to identify opportunities for human error, and review lessons learned and challenges over ownership of testing.
Gorban, A N; Mirkes, E M; Zinovyev, A
2016-12-01
Most machine learning approaches have stemmed from the principle of minimizing the mean squared distance, based on computationally efficient quadratic optimization methods. However, when faced with high-dimensional and noisy data, quadratic error functionals demonstrate many weaknesses, including high sensitivity to contaminating factors and the curse of dimensionality. Therefore, many recent applications in machine learning exploit properties of non-quadratic error functionals based on the L1 norm, or even sub-linear potentials corresponding to quasinorms Lp (0 < p < 1) …
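A small illustration of the sensitivity to contamination mentioned above: with one gross outlier, the minimiser of a quadratic error functional (the mean) shifts badly, while the L1 minimiser (the median) barely moves. The numbers are made up for illustration.

```python
# Hedged sketch contrasting a quadratic error functional with an L1 functional
# on data containing a single gross outlier.
import numpy as np

data = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 50.0])   # last point is contamination

quadratic_fit = data.mean()        # argmin_c of sum (x_i - c)^2
l1_fit = np.median(data)           # argmin_c of sum |x_i - c|

print("quadratic (mean):", round(quadratic_fit, 3))   # ~9.17, pulled by the outlier
print("L1 (median):", round(l1_fit, 3))               # ~1.025, robust
```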
Error-Eliciting Problems: Fostering Understanding and Thinking
ERIC Educational Resources Information Center
Lim, Kien H.
2014-01-01
Student errors are springboards for analyzing, reasoning, and justifying. The mathematics education community recognizes the value of student errors, noting that "mistakes are seen not as dead ends but rather as potential avenues for learning." To induce specific errors and help students learn, choose tasks that might produce mistakes.…
The Lung Image Database Consortium (LIDC): Ensuring the integrity of expert-defined “truth”
Armato, Samuel G.; Roberts, Rachael Y.; McNitt-Gray, Michael F.; Meyer, Charles R.; Reeves, Anthony P.; McLennan, Geoffrey; Engelmann, Roger M.; Bland, Peyton H.; Aberle, Denise R.; Kazerooni, Ella A.; MacMahon, Heber; van Beek, Edwin J.R.; Yankelevitz, David; Croft, Barbara Y.; Clarke, Laurence P.
2007-01-01
Rationale and Objectives Computer-aided diagnostic (CAD) systems fundamentally require the opinions of expert human observers to establish “truth” for algorithm development, training, and testing. The integrity of this “truth,” however, must be established before investigators commit to this “gold standard” as the basis for their research. The purpose of this study was to develop a quality assurance (QA) model as an integral component of the “truth” collection process concerning the location and spatial extent of lung nodules observed on computed tomography (CT) scans to be included in the Lung Image Database Consortium (LIDC) public database. Materials and Methods One hundred CT scans were interpreted by four radiologists through a two-phase process. For the first of these reads (the “blinded read phase”), radiologists independently identified and annotated lesions, assigning each to one of three categories: “nodule ≥ 3mm,” “nodule < 3mm,” or “non-nodule ≥ 3mm.” For the second read (the “unblinded read phase”), the same radiologists independently evaluated the same CT scans but with all of the annotations from the previously performed blinded reads presented; each radiologist could add marks, edit or delete their own marks, change the lesion category of their own marks, or leave their marks unchanged. The post-unblinded-read set of marks was grouped into discrete nodules and subjected to the QA process, which consisted of (1) identification of potential errors introduced during the complete image annotation process (such as two marks on what appears to be a single lesion or an incomplete nodule contour) and (2) correction of those errors. Seven categories of potential error were defined; any nodule with a mark that satisfied the criterion for one of these categories was referred to the radiologist who assigned that mark for either correction or confirmation that the mark was intentional. Results A total of 105 QA issues were identified across 45 (45.0%) of the 100 CT scans. Radiologist review resulted in modifications to 101 (96.2%) of these potential errors. Twenty-one lesions erroneously marked as lung nodules after the unblinded reads had this designation removed through the QA process. Conclusion The establishment of “truth” must incorporate a QA process to guarantee the integrity of the datasets that will provide the basis for the development, training, and testing of CAD systems. PMID:18035275
Permanent-File-Validation Utility Computer Program
NASA Technical Reports Server (NTRS)
Derry, Stephen D.
1988-01-01
Errors in files detected and corrected during operation. Permanent File Validation (PFVAL) utility computer program provides CDC CYBER NOS sites with mechanism to verify integrity of permanent file base. Locates and identifies permanent file errors in Mass Storage Table (MST) and Track Reservation Table (TRT), in permanent file catalog entries (PFC's) in permit sectors, and in disk sector linkage. All detected errors written to listing file and system and job day files. Program operates by reading system tables, catalog track, permit sectors, and disk linkage bytes to validate expected and actual file linkages. Used extensively to identify and locate errors in permanent files and enable online correction, reducing computer-system downtime.
Systematic Error in Leaf Water Potential Measurements with a Thermocouple Psychrometer.
Rawlins, S L
1964-10-30
To allow for the error in measurement of water potentials in leaves, introduced by the presence of a water droplet in the chamber of the psychrometer, a correction must be made for the permeability of the leaf.
Olson, Stephen M; Hussaini, Mohammad; Lewis, James S
2011-05-01
Frozen section analysis is an essential tool for assessing margins intra-operatively to assure complete resection. Many institutions evaluate surgical defect edge tissue provided by the surgeon after the main lesion has been removed. With the increasing use of transoral laser microsurgery, this method is becoming even more prevalent. We sought to evaluate error rates at our large academic institution and to see if sampling errors could be reduced by the simple method change of taking an additional third section on these specimens. All head and neck tumor resection cases from January 2005 through August 2008 with margins evaluated by frozen section were identified by database search. These cases were analyzed by cutting two levels during frozen section and a third permanent section later. All resection cases from August 2008 through July 2009 were identified as well. These were analyzed by cutting three levels during frozen section (the third a 'much deeper' level) and a fourth permanent section later. Error rates for both of these periods were determined. Errors were separated into sampling and interpretation types. There were 4976 total frozen section specimens from 848 patients. The overall error rate was 2.4% for all frozen sections where just two levels were evaluated and was 2.5% when three levels were evaluated (P=0.67). The sampling error rate was 1.6% for two-level sectioning and 1.2% for three-level sectioning (P=0.42). However, when considering only the frozen section cases where tumor was ultimately identified (either at the time of frozen section or on permanent sections) the sampling error rate for two-level sectioning was 15.3 versus 7.4% for three-level sectioning. This difference was statistically significant (P=0.006). Cutting a single additional 'deeper' level at the time of frozen section identifies more tumor-bearing specimens and may reduce the number of sampling errors.
Kandel, Himal; Khadka, Jyoti; Goggin, Michael; Pesudovs, Konrad
2017-12-01
This review has identified the best existing patient-reported outcome (PRO) instruments in refractive error. The article highlights the limitations of the existing instruments and discusses the way forward. A systematic review was conducted to identify the types of PROs used in refractive error, to determine the quality of the existing PRO instruments in terms of their psychometric properties, and to determine the limitations in the content of the existing PRO instruments. Articles describing a PRO instrument measuring 1 or more domains of quality of life in people with refractive error were identified by electronic searches on the MEDLINE, PubMed, Scopus, Web of Science, and Cochrane databases. The information on content development, psychometric properties, validity, reliability, and responsiveness of those PRO instruments was extracted from the selected articles. The analysis was done based on a comprehensive set of assessment criteria. One hundred forty-eight articles describing 47 PRO instruments in refractive error were included in the review. Most of the articles (99 [66.9%]) used refractive error-specific PRO instruments. The PRO instruments comprised 19 refractive, 12 vision but nonrefractive, and 16 generic PRO instruments. Only 17 PRO instruments were validated in refractive error populations; six of them were developed using Rasch analysis. None of the PRO instruments has items across all domains of quality of life. The Quality of Life Impact of Refractive Correction, the Quality of Vision, and the Contact Lens Impact on Quality of Life have comparatively better quality with some limitations, compared with the other PRO instruments. This review describes the PRO instruments and informs the choice of an appropriate measure in refractive error. We identified need of a comprehensive and scientifically robust refractive error-specific PRO instrument. Item banking and computer-adaptive testing system can be the way to provide such an instrument.
Michaelson, M; Walsh, E; Bradley, C P; McCague, P; Owens, R; Sahm, L J
2017-08-01
Prescribing error may result in adverse clinical outcomes leading to increased patient morbidity, mortality and increased economic burden. Many errors occur during transitional care as patients move between different stages and settings of care. To conduct a review of medication information and identify prescribing error among an adult population in an urban hospital. Retrospective review of medication information was conducted. Part 1: an audit of discharge prescriptions which assessed: legibility, compliance with legal requirements, therapeutic errors (strength, dose and frequency) and drug interactions. Part 2: a review of all sources of medication information (namely pre-admission medication list, drug Kardex, discharge prescription, discharge letter) for 15 inpatients to identify unintentional prescription discrepancies, defined as: "undocumented and/or unjustified medication alteration" throughout the hospital stay. Part 1: of the 5910 prescribed items, 53 (0.9%) were deemed illegible. Of the controlled drug prescriptions, 11.1% (n = 167) met all the legal requirements. Therapeutic errors occurred in 41% of prescriptions (n = 479). More than 1 in 5 patients (21.9%) received a prescription containing a drug interaction. Part 2: 175 discrepancies were identified across all sources of medication information, of which 78 were deemed unintentional. Of these, 10.2% (n = 8) occurred at the point of admission, whereas 76.9% (n = 60) occurred at the point of discharge. The study identified the time of discharge as a point at which prescribing errors are likely to occur. This has implications for patient safety and provider workload in both primary and secondary care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun
2014-08-15
Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
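Assuming the four error components are independent, combining them in quadrature reproduces the reported 1.05 mm radial figure; the margin line below additionally assumes a van Herk style recipe and an invented systematic/random split, purely for illustration, since the abstract does not give that split.

```python
# Hedged sketch: quadrature combination of the reported error components, plus
# an illustrative margin calculation under assumed (not reported) values.
import math

components_mm = [0.78, 0.51, 0.49, 0.03]   # phantom-laser, laser-laser, probe, camera
radial = math.sqrt(sum(c ** 2 for c in components_mm))
print(f"combined radial error: {radial:.2f} mm")     # ~1.05 mm

# Illustrative margin if the 1.05 mm were treated as systematic (Sigma) with an
# assumed 1 mm random component (sigma); values assumed, not from the paper.
Sigma, sigma = radial, 1.0
print(f"van Herk style margin: {2.5 * Sigma + 0.7 * sigma:.1f} mm")
```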
Decrease in medical command errors with use of a "standing orders" protocol system.
Holliman, C J; Wuerz, R C; Meador, S A
1994-05-01
The purpose of this study was to determine the physician medical command error rates and paramedic error rates after implementation of a "standing orders" protocol system for medical command. These patient-care error rates were compared with the previously reported rates for a "required call-in" medical command system (Ann Emerg Med 1992; 21(4):347-350). A secondary aim of the study was to determine if the on-scene time interval was increased by the standing orders system. A prospective audit of prehospital advanced life support (ALS) trip sheets was conducted at an urban ALS paramedic service with on-line physician medical command from three local hospitals. All ALS run sheets from the start time of the standing orders system (April 1, 1991) for a 1-year period ending on March 30, 1992 were reviewed as part of an ongoing quality assurance program. Cases were identified as nonjustifiably deviating from regional emergency medical services (EMS) protocols as judged by agreement of three physician reviewers (the same methodology as a previously reported command error study in the same ALS system). Medical command and paramedic errors were identified from the prehospital ALS run sheets and categorized. Two thousand one ALS runs were reviewed; 24 physician errors (1.2% of the 1,928 "command" runs) and eight paramedic errors (0.4% of runs) were identified. The physician error rate was decreased from the 2.6% rate in the previous study (P < .0001 by chi-square analysis). The on-scene time interval did not increase with the "standing orders" system. (ABSTRACT TRUNCATED AT 250 WORDS)
Filtered Push: Annotating Distributed Data for Quality Control and Fitness for Use Analysis
NASA Astrophysics Data System (ADS)
Morris, P. J.; Kelly, M. A.; Lowery, D. B.; Macklin, J. A.; Morris, R. A.; Tremonte, D.; Wang, Z.
2009-12-01
The single greatest problem with the federation of scientific data is the assessment of the quality and validity of the aggregated data in the context of particular research problems, that is, its fitness for use. There are three critical data quality issues in networks of distributed natural science collections data, as in all scientific data: identifying and correcting errors, maintaining currency, and assessing fitness for use. To this end, we have designed and implemented a prototype network in the domain of natural science collections. This prototype is built over the open source Map-Reduce platform Hadoop with a network client in the open source collections management system Specify 6. We call this network “Filtered Push” as, at its core, annotations are pushed from the network edges to relevant authoritative repositories, where humans and software filter the annotations before accepting them as changes to the authoritative data. The Filtered Push software is a domain-neutral framework for originating, distributing, and analyzing record-level annotations. Network participants can subscribe to notifications arising from ontology-based analyses of new annotations or of purpose-built queries against the network's global history of annotations. Quality and fitness for use of distributed natural science collections data can be addressed with Filtered Push software by implementing a network that allows data providers and consumers to define potential errors in data, develop metrics for those errors, specify workflows to analyze distributed data to detect potential errors, and close the quality management cycle by providing a network architecture for pushing assertions about data quality, such as corrections, back to the curators of the participating data sets. Quality issues in distributed scientific data have several things in common: (1) Statements about data quality should be regarded as hypotheses about inconsistencies between perhaps several records, data sets, or practices of science. (2) Data quality problems often cannot be detected only from internal statistical correlations or logical analysis, but may need the application of defined workflows that signal illogical output. (3) Changes in scientific theory or practice over time can result in changes in what QC tests should be applied to legacy data. (4) The frequency of some classes of error in a data set may be identifiable without the ability to assert that a particular record is in error. Addressing these issues requires, as does science itself, framing QC hypotheses against data that may be anywhere and may arise at any time in the future. In short, QC for science data is a never-ending process. It must provide for notice to an agent (human or software) that a given dataset supports a hypothesis of inconsistency with a current scientific resource or model, or with potential generalizations of the concepts in a metadata ontology. Like quality control in general, quality control of distributed data is a repeated cyclical process. In implementing a Filtered Push network for quality control, we have a model in which the cost of QC forever is not substantially greater than QC once.
Intuitive theories of information: beliefs about the value of redundancy.
Soll, J B
1999-03-01
In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.
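The normative preference for nonredundant sources can be made explicit with a standard result (not stated in the article) for the variance of an average of n unbiased estimates whose errors share variance σ² and pairwise correlation ρ: the benefit of averaging disappears as ρ grows.

```latex
% Variance of the average of n unbiased estimates with equicorrelated errors;
% a textbook result added here only to make the value of nonredundancy explicit.
\mathrm{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right)
  = \frac{\sigma^{2}}{n}\bigl(1 + (n-1)\rho\bigr),
\qquad \rho \to 0 \;\Rightarrow\; \frac{\sigma^{2}}{n},
\qquad \rho \to 1 \;\Rightarrow\; \sigma^{2}.
```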
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting.
Khan, Tarik A; Friedensohn, Simon; Gorter de Vries, Arthur R; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T
2016-03-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion-the intraclonal diversity index-which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology.
Accurate and predictive antibody repertoire profiling by molecular amplification fingerprinting
Khan, Tarik A.; Friedensohn, Simon; de Vries, Arthur R. Gorter; Straszewski, Jakub; Ruscheweyh, Hans-Joachim; Reddy, Sai T.
2016-01-01
High-throughput antibody repertoire sequencing (Ig-seq) provides quantitative molecular information on humoral immunity. However, Ig-seq is compromised by biases and errors introduced during library preparation and sequencing. By using synthetic antibody spike-in genes, we determined that primer bias from multiplex polymerase chain reaction (PCR) library preparation resulted in antibody frequencies with only 42 to 62% accuracy. Additionally, Ig-seq errors resulted in antibody diversity measurements being overestimated by up to 5000-fold. To rectify this, we developed molecular amplification fingerprinting (MAF), which uses unique molecular identifier (UID) tagging before and during multiplex PCR amplification, which enabled tagging of transcripts while accounting for PCR efficiency. Combined with a bioinformatic pipeline, MAF bias correction led to measurements of antibody frequencies with up to 99% accuracy. We also used MAF to correct PCR and sequencing errors, resulting in enhanced accuracy of full-length antibody diversity measurements, achieving 98 to 100% error correction. Using murine MAF-corrected data, we established a quantitative metric of recent clonal expansion—the intraclonal diversity index—which measures the number of unique transcripts associated with an antibody clone. We used this intraclonal diversity index along with antibody frequencies and somatic hypermutation to build a logistic regression model for prediction of the immunological status of clones. The model was able to predict clonal status with high confidence but only when using MAF error and bias corrected Ig-seq data. Improved accuracy by MAF provides the potential to greatly advance Ig-seq and its utility in immunology and biotechnology. PMID:26998518
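A minimal sketch of the final modelling step, assuming scikit-learn's LogisticRegression over simulated clone-level features (clonal frequency, somatic hypermutation, intraclonal diversity index); the simulated data and coefficients are illustrative, not the murine data used in the study.

```python
# Hedged sketch: logistic regression predicting clonal status from clone-level
# features named in the abstract. All data below are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.lognormal(mean=-6, sigma=1.5, size=n),   # clonal frequency
    rng.poisson(lam=4, size=n),                  # somatic hypermutation count
    rng.poisson(lam=3, size=n) + 1,              # intraclonal diversity index
])
# Simulated label loosely driven by the features, standing in for real status
logit = 0.8 * np.log(X[:, 0]) + 0.3 * X[:, 1] + 0.5 * X[:, 2] + 3.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

clf = LogisticRegression(max_iter=1000)
print("cv accuracy:", cross_val_score(clf, np.log1p(X), y, cv=5).mean())
```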
Cooper, P David; Smart, David R
2017-03-01
In an era of ever-increasing medical costs, the identification and prohibition of ineffective medical therapies is of considerable economic interest to healthcare funding bodies. Likewise, the avoidance of interventions with an unduly elevated clinical risk/benefit ratio would be similarly advantageous for patients. Regrettably, the identification of such therapies has proven problematic. A recent paper from the Grattan Institute in Australia (identifying five hospital procedures as having the potential for disinvestment on these grounds) serves as a timely illustration of the difficulties inherent in non-clinicians attempting to accurately recognize such interventions using non-clinical, indirect or poorly validated datasets. To evaluate the Grattan Institute report and associated publications, and determine the validity of their assertions regarding hyperbaric oxygen treatment (HBOT) utilisation in Australia. Critical analysis of the HBOT metadata included in the Grattan Institute study was undertaken and compared against other publicly available Australian Government and independent data sources. The consistency, accuracy and reproducibility of data definitions and terminology across the various publications were appraised and the authors' methodology was reviewed. Reference sources were examined for relevance and temporal eligibility. Review of the Grattan publications demonstrated multiple problems, including (but not limited to): confusing patient-treatments with total patient numbers; incorrect identification of 'appropriate' vs. 'inappropriate' indications for HBOT; reliance upon a compromised primary dataset; lack of appropriate clinical input, muddled methodology and use of inapplicable references. These errors resulted in a more than seventy-fold over-estimation of the number of patients potentially treated inappropriately with HBOT in Australia that year. Numerous methodological flaws and factual errors have been identified in this Grattan Institute study. Its conclusions are not valid and a formal retraction is required.
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)
2002-01-01
One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationships between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L
2010-02-01
This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.
Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report
2016-01-01
This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information processing workflow can result in errors, such as incorrect or inaccurate patient setups. With automatic image checking and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, so as to allow automatic checks of position and orientation for daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. Detection was correct for 182 of 200 kV portal images, a correct rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It requires only the image intensity information in kV portal images. The method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and so improve patient safety. In addition, the auto-detection results, namely the patient treatment site, position and orientation, could be used to guide subsequent image processing procedures, e.g. verification of daily patient setup accuracy. This work was partially supported by a research grant from Varian Medical System.
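The matching step described above is essentially preprocessing followed by normalized cross-correlation against a fused whole-body DRR template. A minimal sketch of such a step, assuming OpenCV is available and that the portal image and fused template already exist as files; the file paths and the four-orientation search are illustrative simplifications, not the authors' implementation:

```python
import cv2
import numpy as np

def detect_setup(portal_path, template_path, scale=0.25):
    """Locate a preprocessed portal image within a whole-body DRR template.

    Returns the best correlation score, match location in the template, and
    rotation (degrees) over the four cardinal orientations.
    """
    portal = cv2.imread(portal_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)

    # Preprocessing: contrast enhancement and down-sampling, mirroring the
    # steps described in the abstract (couch detection is omitted here).
    portal = cv2.resize(cv2.equalizeHist(portal), None, fx=scale, fy=scale)
    template = cv2.resize(template, None, fx=scale, fy=scale)

    best = None
    for k in range(4):  # test 0/90/180/270 degree orientations
        rotated = np.ascontiguousarray(np.rot90(portal, k))
        if rotated.shape[0] > template.shape[0] or rotated.shape[1] > template.shape[1]:
            continue  # skip orientations that do not fit inside the template
        result = cv2.matchTemplate(template, rotated, cv2.TM_CCOEFF_NORMED)
        _, score, _, loc = cv2.minMaxLoc(result)
        if best is None or score > best[0]:
            best = (score, loc, 90 * k)
    return best
```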
Tridandapani, Srini; Ramamurthy, Senthil; Provenzale, James; Obuchowski, Nancy A; Evanoff, Michael G; Bhatti, Pamela
2014-08-01
To evaluate whether the presence of facial photographs obtained at the point of care of portable radiography leads to increased detection of wrong-patient errors. In this institutional review board-approved study, 166 radiograph-photograph combinations were obtained from 30 patients. Consecutive radiographs from the same patients resulted in 83 unique pairs (ie, a new radiograph and a prior, comparison radiograph) for interpretation. To simulate wrong-patient errors, mismatched pairs were generated by pairing radiographs from different patients chosen randomly from the sample. Ninety radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs, containing up to 10% mismatches (ie, error pairs). Radiologists were randomly assigned to interpret radiographs with or without photographs. The number of mismatches was identified, and interpretation times were recorded. Ninety radiologists with 21 ± 10 (mean ± standard deviation) years of experience were recruited to participate in this observer study. With the introduction of photographs, the proportion of errors detected increased from 31% (9 of 29) to 77% (23 of 30; P = .006). The odds ratio for detection of error with photographs relative to detection without photographs was 7.3 (95% confidence interval: 2.29-23.18). Observer qualifications, training, or practice in cardiothoracic radiology did not influence sensitivity for error detection. There was no significant difference in interpretation time between studies without photographs and those with photographs (60 ± 22 vs. 61 ± 25 seconds; P = .77). In this observer study, facial photographs obtained simultaneously with portable chest radiographs increased the identification of wrong-patient errors without a substantial increase in interpretation time. This technique offers a potential means to increase patient safety through correct patient identification. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Pouplier, Marianne; Marin, Stefania; Waltl, Susanne
2014-01-01
Purpose: Phonetic accommodation in speech errors has traditionally been used to identify the processing level at which an error has occurred. Recent studies have challenged the view that noncanonical productions may solely be due to phonetic, not phonological, processing irregularities, as previously assumed. The authors of the present study…
Error Analysis in Composition of Iranian Lower Intermediate Students
ERIC Educational Resources Information Center
Taghavi, Mehdi
2012-01-01
Learners make errors during the process of learning languages. This study examines errors in the writing task of twenty Iranian lower-intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…
An Analysis of Computational Errors in the Use of Division Algorithms by Fourth-Grade Students.
ERIC Educational Resources Information Center
Stefanich, Greg P.; Rokusek, Teri
1992-01-01
Presents a study that analyzed errors made by randomly chosen fourth grade students (25 of 57) while using the division algorithm and investigated the effect of remediation on identified systematic errors. Results affirm that error pattern diagnosis and directed remediation lead to new learning and long-term retention. (MDH)
Classification of drugs with different risk profiles.
Saedder, Eva Aggerholm; Brock, Birgitte; Nielsen, Lars Peter; Bonnerup, Dorthe Krogsgaard; Lisby, Marianne
2015-08-01
A risk stratification approach is needed to identify patients at high risk of medication errors and a resulting high need of medication review. The aim of this study was to perform risk stratification (distinguishing between low-risk, medium-risk and high-risk drugs) for drugs found to cause serious adverse reactions due to medication errors. The study employed a modified Delphi technique. Drugs from a systematic literature search were entered into two rounds of a Delphi process. A panel of experts was asked to evaluate each identified drug's potential for harm and for clinically relevant drug-drug interactions on a scale from 1 (low risk) to 9 (high risk). A total of 36 experts were appointed to serve on the panel. Consensus was reached for 29/57 (51%) of the drugs or drug classes that cause harm, and for 32/57 (56%) of the drugs or drug classes that cause interactions. For the remaining drugs, a decision was made based on the median score. Two lists, one stating the drugs' potential for causing harm and the other stating clinically relevant drug-drug interactions, were stratified into low-risk, medium-risk and high-risk drugs. Based on a modified Delphi technique, we created two lists of drugs stratified into a low-risk, a medium-risk and a high-risk group of clinically relevant interactions or risk of harm to patients. The lists could be incorporated into a risk-scoring tool that stratifies the performance of medication reviews according to patients' risk of experiencing adverse reactions.
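The stratification rule described above (falling back to the panel's median score where consensus is not reached, then binning into low-, medium- and high-risk groups) is straightforward to express. A minimal sketch under assumed cut-offs; the thresholds here are illustrative, not the ones used in the study:

```python
import statistics

def stratify(ratings, low_cutoff=3, high_cutoff=7):
    """Assign a drug to a risk tier from a panel's 1-9 ratings.

    The tier boundaries are illustrative; the published study defines its
    own consensus rules and cut-offs.
    """
    med = statistics.median(ratings)
    if med <= low_cutoff:
        return "low-risk"
    if med >= high_cutoff:
        return "high-risk"
    return "medium-risk"

# Hypothetical panel ratings of one drug's potential for harm.
print(stratify([7, 8, 6, 9, 7, 8, 7, 6, 8, 7]))  # -> high-risk
```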
Tsui, Lok-kun; Benavidez, Angelica; Palanisamy, Ponnusamy; ...
2017-04-13
The development of on-board sensors for emissions monitoring is necessary for continuous monitoring of the performance of catalytic systems in automobiles. We have fabricated mixed potential electrochemical gas sensing devices with Pt, La0.8Sr0.2CrO3 (LSCO), and Au/Pd alloy electrodes and a porous yttria-stabilized zirconia electrolyte. The three-electrode design takes advantage of the preferential selectivity of the Pt + Au/Pd and Pt + LSCO pairs towards different species of gases and has additional tunable selectivity achieved by applying a current bias to the latter pair. Voltages were recorded in single, binary, and ternary gas streams of NO, NO2, C3H8, and CO. We have also trained artificial neural networks to examine the voltage output from sensors in biased and unbiased modes, both to identify which single test gas or binary mixture of two test gases is present in a gas stream and to extract concentration values. We were then able to identify single and binary mixtures of these gases with an accuracy of at least 98%. For determining concentration, the peak in the error distribution for binary mixtures was 5%, and 80% of test data had less than 12% error. The sensor stability was also evaluated over the course of more than 100 days, and the ability to retrain ANNs with a small dataset was demonstrated.
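The reported approach feeds the biased and unbiased sensor voltages into artificial neural networks that output either a gas/mixture class or a concentration. A small, hypothetical sketch of the classification half using scikit-learn's MLPClassifier; the voltage features and class labels below are synthetic placeholders, not the study's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the measured voltages: each row holds the responses
# of the Pt + Au/Pd pair and the Pt + LSCO pair (unbiased and biased modes).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 3))
# Synthetic class labels (0-3) standing in for "which gas or binary mixture";
# they are derived from the sign pattern of two features so the toy example
# actually trains to high accuracy.
y = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print("held-out classification accuracy:", clf.score(X_test, y_test))
```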
Event-related potentials for post-error and post-conflict slowing.
Chang, Andrew; Chen, Chien-Chung; Li, Hsin-Hung; Li, Chiang-Shan R
2014-01-01
In a reaction time task, people typically slow down following an error or conflict, phenomena termed post-error slowing (PES) and post-conflict slowing (PCS), respectively. Despite many studies of the cognitive mechanisms, the neural responses of PES and PCS continue to be debated. In this study, we combined high-density array EEG and a stop-signal task to examine event-related potentials of PES and PCS in sixteen young adult participants. The results showed that the amplitude of N2 is greater during PES but not PCS. In contrast, the peak latency of N2 is longer for PCS but not PES. Furthermore, error-positivity (Pe) but not error-related negativity (ERN) was greater in the stop error trials preceding PES than in non-PES trials, suggesting that PES is related to participants' awareness of the error. Together, these findings extend earlier work on cognitive control by specifying the neural correlates of PES and PCS in the stop signal task.
Using meta-quality to assess the utility of volunteered geographic information for science.
Langley, Shaun A; Messina, Joseph P; Moore, Nathan
2017-11-06
Volunteered geographic information (VGI) has strong potential to be increasingly valuable to scientists in collaboration with non-scientists. The abundance of mobile phones and other wireless forms of communication open up significant opportunities for the public to get involved in scientific research. As these devices and activities become more abundant, questions of uncertainty and error in volunteer data are emerging as critical components for using volunteer-sourced spatial data. Here we present a methodology for using VGI and assessing its sensitivity to three types of error. More specifically, this study evaluates the reliability of data from volunteers based on their historical patterns. The specific context is a case study in surveillance of tsetse flies, a health concern for being the primary vector of African Trypanosomiasis. Reliability, as measured by a reputation score, determines the threshold for accepting the volunteered data for inclusion in a tsetse presence/absence model. Higher reputation scores are successful in identifying areas of higher modeled tsetse prevalence. A dynamic threshold is needed but the quality of VGI will improve as more data are collected and the errors in identifying reliable participants will decrease. This system allows for two-way communication between researchers and the public, and a way to evaluate the reliability of VGI. Boosting the public's ability to participate in such work can improve disease surveillance and promote citizen science. In the absence of active surveillance, VGI can provide valuable spatial information given that the data are reliable.
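The core acceptance rule (include a volunteer's new observation only if a reputation score derived from their history clears a threshold) can be sketched in a few lines. The scoring function, field names and threshold below are hypothetical simplifications of the paper's method:

```python
def reputation(history):
    """Fraction of a volunteer's past reports that were later confirmed."""
    if not history:
        return 0.0
    return sum(history) / len(history)

def accept_reports(reports, histories, threshold=0.7):
    """Keep only reports from volunteers whose reputation clears the threshold.

    `reports` maps volunteer id -> new observation; `histories` maps
    volunteer id -> list of 0/1 flags for past confirmed observations.
    The threshold would be tuned dynamically in practice, as the paper notes.
    """
    return {vid: obs for vid, obs in reports.items()
            if reputation(histories.get(vid, [])) >= threshold}

histories = {"v1": [1, 1, 1, 0], "v2": [0, 1, 0, 0]}
reports = {"v1": {"lat": -2.98, "lon": 37.50, "tsetse": 1},
           "v2": {"lat": -3.01, "lon": 37.40, "tsetse": 0}}
print(accept_reports(reports, histories))  # only v1's report passes
```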
A Physician-based Voluntary Reporting System for Adverse Events and Medical Errors
Weingart, Saul N; Callanan, Lawrence D; Ship, Amy N; Aronson, Mark D
2001-01-01
OBJECTIVE To create a voluntary reporting method for identifying adverse events (AEs) and potential adverse events (PAEs) among medical inpatients. DESIGN Medical house officers asked their peers about obstacles to care, injuries or extended hospitalizations, and problems with medications that affected their patients. Two independent reviewers coded event narratives for adverse outcomes, responsible parties, preventability, and process problems. We corroborated house officers' reports with hospital incident reports and conducted a retrospective chart review. SETTING The cardiac step-down, oncology, and medical intensive care units of an urban teaching hospital. INTERVENTION Structured confidential interviews by postgraduate year-2 and -3 medical residents of interns during work rounds. MEASUREMENTS AND MAIN RESULTS Respondents reported 88 events over 3 months. AEs occurred among 5 patients (0.5% of admissions) and PAEs among 48 patients (4.9% of admissions). Delayed diagnoses and treatments figured prominently among PAEs (54%). Clinicians were responsible for the greatest number of incidents (55%), followed by workers in the laboratory (11%), radiology (15%), and pharmacy (3%). Respondents identified a variety of problematic processes of care, including problems with diagnosis (16%), therapy (26%), and failure to provide clinical and support services (29%). We corroborated 84% of reported events in the medical record. Participants found voluntary peer reporting of medical errors unobtrusive and agreed that it could be implemented on a regular basis. CONCLUSIONS A physician-based voluntary reporting system for medical errors is feasible and acceptable to front-line clinicians. PMID:11903759
Nagayama, T.; Bailey, J. E.; Loisel, G. P.; ...
2017-06-26
Iron opacity calculations presently disagree with measurements at an electron temperature of ~180–195 eV and an electron density of (2–4)×10²² cm⁻³, conditions similar to those at the base of the solar convection zone. The measurements use x rays to volumetrically heat a thin iron sample that is tamped with low-Z materials. The opacity is inferred from spectrally resolved x-ray transmission measurements. Plasma self-emission, tamper attenuation, and temporal and spatial gradients can all potentially cause systematic errors in the measured opacity spectra. In this article we quantitatively evaluate these potential errors with numerical investigations. The analysis exploits computer simulations that were previously found to reproduce the experimentally measured plasma conditions. The simulations, combined with a spectral synthesis model, enable evaluations of individual and combined potential errors in order to estimate their potential effects on the opacity measurement. Lastly, the results show that the errors considered here do not account for the previously observed model-data discrepancies.
The Neural Basis of Error Detection: Conflict Monitoring and the Error-Related Negativity
ERIC Educational Resources Information Center
Yeung, Nick; Botvinick, Matthew M.; Cohen, Jonathan D.
2004-01-01
According to a recent theory, anterior cingulate cortex is sensitive to response conflict, the coactivation of mutually incompatible responses. The present research develops this theory to provide a new account of the error-related negativity (ERN), a scalp potential observed following errors. Connectionist simulations of response conflict in an…
The decline and fall of Type II error rates
Steve Verrill; Mark Durst
2005-01-01
For general linear models with normally distributed random errors, the probability of a Type II error decreases exponentially as a function of sample size. This potentially rapid decline reemphasizes the importance of performing power calculations.
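For a concrete feel of that exponential decline, the Type II error of a one-sided one-sample z-test can be written in closed form and evaluated over increasing sample sizes. This is a textbook special case rather than the general-linear-model result of the report, but it shows the same behaviour:

```python
from scipy.stats import norm

def type2_error(n, effect=0.5, sigma=1.0, alpha=0.05):
    """Type II error for a one-sided one-sample z-test of a normal mean.

    beta(n) = Phi(z_(1-alpha) - effect * sqrt(n) / sigma); effect size,
    sigma and alpha here are illustrative choices.
    """
    z_crit = norm.ppf(1 - alpha)
    return norm.cdf(z_crit - effect * n ** 0.5 / sigma)

for n in (5, 10, 20, 40, 80):
    print(n, round(type2_error(n), 4))  # beta drops rapidly as n doubles
```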
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Eas M.
2003-01-01
The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.
Predictability of the Arctic sea ice edge
NASA Astrophysics Data System (ADS)
Goessling, H. F.; Tietsche, S.; Day, J. J.; Hawkins, E.; Jung, T.
2016-02-01
Skillful sea ice forecasts from days to years ahead are becoming increasingly important for the operation and planning of human activities in the Arctic. Here we analyze the potential predictability of the Arctic sea ice edge in six climate models. We introduce the integrated ice-edge error (IIEE), a user-relevant verification metric defined as the area where the forecast and the "truth" disagree on the ice concentration being above or below 15%. The IIEE lends itself to decomposition into an absolute extent error, corresponding to the common sea ice extent error, and a misplacement error. We find that the often-neglected misplacement error makes up more than half of the climatological IIEE. In idealized forecast ensembles initialized on 1 July, the IIEE grows faster than the absolute extent error. This means that the Arctic sea ice edge is less predictable than sea ice extent, particularly in September, with implications for the potential skill of end-user relevant forecasts.
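Given gridded forecast and observed ("truth") ice-concentration fields plus grid-cell areas, the IIEE and its decomposition follow directly from the definition quoted above. A short numpy sketch of that computation:

```python
import numpy as np

def iiee(conc_forecast, conc_truth, cell_area, threshold=0.15):
    """Integrated ice-edge error and its decomposition.

    IIEE = area where forecast and truth disagree on concentration > 15%.
    AEE  = |forecast extent - observed extent| (absolute extent error).
    ME   = IIEE - AEE (misplacement error).
    """
    ice_f = np.asarray(conc_forecast) > threshold
    ice_t = np.asarray(conc_truth) > threshold
    area = np.asarray(cell_area, dtype=float)
    overestimate = area[ice_f & ~ice_t].sum()
    underestimate = area[~ice_f & ice_t].sum()
    iiee_val = overestimate + underestimate
    aee = abs(overestimate - underestimate)  # equals |extent_f - extent_t|
    return iiee_val, aee, iiee_val - aee

# Toy 1-D "grid" with unit cell areas (placeholder concentrations).
f = np.array([0.9, 0.5, 0.2, 0.05])
t = np.array([0.9, 0.1, 0.1, 0.40])
print(iiee(f, t, np.ones(4)))  # (3.0, 1.0, 2.0)
```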
Bellenguez, Céline; Strange, Amy; Freeman, Colin; Donnelly, Peter; Spencer, Chris C A
2012-01-01
High-throughput genotyping arrays provide an efficient way to survey single nucleotide polymorphisms (SNPs) across the genome in large numbers of individuals. Downstream analysis of the data, for example in genome-wide association studies (GWAS), often involves statistical models of genotype frequencies across individuals. The complexities of the sample collection process and the potential for errors in the experimental assay can lead to biases and artefacts in an individual's inferred genotypes. Rather than attempting to model these complications, it has become a standard practice to remove individuals whose genome-wide data differ from the sample at large. Here we describe a simple, but robust, statistical algorithm to identify samples with atypical summaries of genome-wide variation. Its use as a semi-automated quality control tool is demonstrated using several summary statistics, selected to identify different potential problems, and it is applied to two different genotyping platforms and sample collections. The algorithm is written in R and is freely available at www.well.ox.ac.uk/chris-spencer. Contact: chris.spencer@well.ox.ac.uk. Supplementary data are available at Bioinformatics online.
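The flagging step amounts to asking whether each individual's genome-wide summary statistic is atypical relative to the bulk of the sample. A simple robust-z-score stand-in for that idea (the published algorithm, available in R at the URL above, is more elaborate; this only illustrates the principle):

```python
import numpy as np

def flag_outliers(summary_stats, z_cutoff=4.0):
    """Flag samples whose summary statistic deviates strongly from the bulk.

    Uses a median/MAD robust z-score: samples whose genome-wide summary
    differs markedly from the sample at large are marked for removal.
    """
    x = np.asarray(summary_stats, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    robust_z = 0.6745 * (x - med) / (mad if mad > 0 else 1e-12)
    return np.abs(robust_z) > z_cutoff

# Placeholder heterozygosity values with one contaminated-looking sample.
het = np.r_[np.random.default_rng(1).normal(0.32, 0.01, 99), 0.45]
print(np.where(flag_outliers(het))[0])  # the last sample stands out
```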
Wagner, Tyler; DeWeber, Jefferson Tyrell; Tsang, Yin-Phan; Krueger, Damon; Whittier, Joanna B.; Infante, Dana M.; Whelan, Gary
2014-01-01
Flow and water temperature are fundamental properties of stream ecosystems upon which many freshwater resource management decisions are based. U.S. Geological Survey (USGS) gages are the most important source of streamflow and water temperature data available nationwide, but the degree to which gages represent landscape attributes of the larger population of streams has not been thoroughly evaluated. We identified substantial biases for seven landscape attributes in one or more regions across the conterminous United States. Streams with small watersheds (<10 km2) and at high elevations were often underrepresented, and biases were greater for water temperature gages and in arid regions. Biases can fundamentally alter management decisions and at a minimum this potential for error must be acknowledged accurately and transparently. We highlight three strategies that seek to reduce bias or limit errors arising from bias and illustrate how one strategy, supplementing USGS data, can greatly reduce bias.
A next generation multiscale view of inborn errors of metabolism
Argmann, Carmen A.; Houten, Sander M.; Zhu, Jun; Schadt, Eric E.
2015-01-01
Inborn errors of metabolism (IEM) are not unlike common diseases. They often present as a spectrum of disease phenotypes that correlates poorly with the severity of the disease-causing mutations. This greatly impacts patient care and reveals fundamental gaps in our knowledge of disease modifying biology. Systems biology approaches that integrate multi-omics data into molecular networks have significantly improved our understanding of complex diseases. Similar approaches to study IEM are rare despite their complex nature. We highlight that existing common disease-derived datasets and networks can be repurposed to generate novel mechanistic insight in IEM and potentially identify candidate modifiers. While understanding disease pathophysiology will advance the IEM field, the ultimate goal should be to understand per individual how their phenotype emerges given their primary mutation on the background of their whole genome, not unlike personalized medicine. We foresee that panomics and network strategies combined with recent experimental innovations will facilitate this. PMID:26712461
Hexagonal Uniformly Redundant Arrays (HURAs) for scintillator based coded aperture neutron imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamage, K.A.A.; Zhou, Q.
2015-07-01
A series of Monte Carlo simulations have been conducted, making use of the EJ-426 neutron scintillator detector, to investigate the potential of using hexagonal uniformly redundant arrays (HURAs) for scintillator based coded aperture neutron imaging. This type of scintillator material has a low sensitivity to gamma rays, and is therefore of particular use in a system with a source that emits both neutrons and gamma rays. The simulations used an AmBe source; neutron images have been produced using different coded-aperture materials (boron-10, cadmium-113 and gadolinium-157) and the location error has also been estimated. In each case the neutron image clearly shows the location of the source with a relatively small location error. Neutron images with high resolution can easily be used to identify and locate nuclear materials precisely in nuclear security and nuclear decommissioning applications.
NASA Technical Reports Server (NTRS)
Pallix, Joan B.; Copeland, Richard A.; Arnold, James O. (Technical Monitor)
1995-01-01
Advanced laser-based diagnostics have been developed to examine catalytic effects and atom/surface interactions on thermal protection materials. This study establishes the feasibility of using laser-induced fluorescence for detection of O and N atom loss in a diffusion tube to measure surface catalytic activity. The experimental apparatus is versatile in that it allows fluorescence detection to be used for measuring species selective recombination coefficients as well as diffusion tube and microwave discharge diagnostics. Many of the potential sources of error in measuring atom recombination coefficients by this method have been identified and taken into account. These include scattered light, detector saturation, sample surface cleanliness, reactor design, gas pressure and composition, and selectivity of the laser probe. Recombination coefficients and their associated errors are reported for N and O atoms on a quartz surface at room temperature.
Validation of a general practice audit and data extraction tool.
Peiris, David; Agaliotis, Maria; Patel, Bindu; Patel, Anushka
2013-11-01
We assessed how accurately a common general practitioner (GP) audit tool extracts data from two software systems. First, pathology test codes were audited at 33 practices covering nine companies. Second, a manual audit of chronic disease data from 200 random patient records at two practices was compared with audit tool data. Pathology review: all companies assigned correct codes for cholesterol, creatinine and glycated haemoglobin; four companies assigned incorrect codes for albuminuria tests, precluding accurate detection with the audit tool. Case record review: there was strong agreement between the manual audit and the tool for all variables except chronic kidney disease diagnoses, which was due to a tool-related programming error. The audit tool accurately detected most chronic disease data in two GP record systems. The one exception, however, highlights the importance of surveillance systems to promptly identify errors. This will maximise potential for audit tools to improve healthcare quality.
NASA Technical Reports Server (NTRS)
Hardy, E. E. (Principal Investigator); Skaley, J. E.; Dawson, C. P.; Weiner, G. D.; Phillips, E. S.; Fisher, R. A.
1975-01-01
The author has identified the following significant results. Three sites were evaluated for land use inventory: Finger Lakes - Tompkins County, Lower Hudson Valley - Newburgh, and Suffolk County - Long Island. Special photo enhancement processes were developed to standardize the density range and contrast among S190A negatives. Enhanced black and white enlargements were converted to color by contact printing onto diazo film. A color prediction model related the density values on each spectral band for each category of land use to the spectral properties of the various diazo dyes. The S190A multispectral system proved to be almost as effective as the S190B high resolution camera for inventorying land use. Aggregate error for Level 1 averaged about 12% while Level 2 aggregate error averaged about 25%. The S190A system proved to be much superior to LANDSAT in inventorying land use, primarily because of increased resolution.
Distributed control of large space antennas
NASA Technical Reports Server (NTRS)
Cameron, J. M.; Hamidi, M.; Lin, Y. H.; Wang, S. J.
1983-01-01
A systematic way to choose control design parameters and to evaluate performance for large space antennas is presented. The structural dynamics and control properties for a Hoop and Column Antenna and a Wrap-Rib Antenna are characterized. Some results of the effects of model parameter uncertainties on the stability, surface accuracy, and pointing errors are presented. Critical dynamics and control problems for these antenna configurations are identified and potential solutions are discussed. It was concluded that structural uncertainties and model error can cause serious performance deterioration and can even destabilize the controllers. For the hoop and column antenna, the large hoop, the long mast, and the lack of stiffness between the two substructures result in low structural frequencies. Performance can be improved if this design can be strengthened. The two-site control system is more robust than either single-site control system for the hoop and column antenna.
Parallel photonic information processing at gigabyte per second data rates using transient states
NASA Astrophysics Data System (ADS)
Brunner, Daniel; Soriano, Miguel C.; Mirasso, Claudio R.; Fischer, Ingo
2013-01-01
The increasing demands on information processing require novel computational concepts and true parallelism. Nevertheless, hardware realizations of unconventional computing approaches never exceeded a marginal existence. While the application of optics in super-computing receives reawakened interest, new concepts, partly neuro-inspired, are being considered and developed. Here we experimentally demonstrate the potential of a simple photonic architecture to process information at unprecedented data rates, implementing a learning-based approach. A semiconductor laser subject to delayed self-feedback and optical data injection is employed to solve computationally hard tasks. We demonstrate simultaneous spoken digit and speaker recognition and chaotic time-series prediction at data rates beyond 1Gbyte/s. We identify all digits with very low classification errors and perform chaotic time-series prediction with 10% error. Our approach bridges the areas of photonic information processing, cognitive and information science.
Farfán Sedano, Francisco J; Terrón Cuadrado, Marta; Castellanos Clemente, Yolanda; Serrano Balazote, Pablo; Moner Cano, David; Robles Viejo, Montserrat
2011-01-01
The comparison of the patient's current medication list with the medication being ordered on admission to hospital, identifying omissions, duplications, dosing errors, and potential interactions, constitutes the core process of medicines reconciliation. Access to the medication the patient is taking at home can be unfeasible, as this information is frequently stored in various locations and in diverse proprietary formats. The lack of interoperability between those information systems, namely the Primary Care and the Specialized Electronic Health Records (EHRs), facilitates medication errors and endangers patient safety. Thus, the development of a Patient Summary that includes clinical data from different electronic systems will allow doctors access to relevant information, enabling safer and more efficient care. Such a collection of data from heterogeneous and distributed systems has been achieved in this project through the construction of a federated view based on the ISO/CEN EN13606 standard for architecture and communication of EHRs.
Medical errors in primary care clinics – a cross sectional study
2012-01-01
Background Patient safety is vital in patient care. There is a lack of studies on medical errors in primary care settings. The aim of the study is to determine the extent of diagnostic inaccuracies and management errors in publicly funded primary care clinics. Methods This was a cross-sectional study conducted in twelve publicly funded primary care clinics in Malaysia. A total of 1753 medical records were randomly selected in 12 primary care clinics in 2007 and were reviewed by trained family physicians for diagnostic, management and documentation errors, potential errors causing serious harm and likelihood of preventability of such errors. Results The majority of patient encounters (81%) were with medical assistants. Diagnostic errors were present in 3.6% (95% CI: 2.2, 5.0) of medical records and management errors in 53.2% (95% CI: 46.3, 60.2). Among management errors, medication errors were present in 41.1% (95% CI: 35.8, 46.4) of records, investigation errors in 21.7% (95% CI: 16.5, 26.8) and decision-making errors in 14.5% (95% CI: 10.8, 18.2). A total of 39.9% (95% CI: 33.1, 46.7) of these errors had the potential to cause serious harm. Problems of documentation, including illegible handwriting, were found in 98.0% (95% CI: 97.0, 99.1) of records. Nearly all errors (93.5%) detected were considered preventable. Conclusions The occurrence of medical errors was high in primary care clinics, particularly documentation and medication errors. Nearly all were preventable. Remedial interventions addressing completeness of documentation and prescriptions are likely to yield a reduction in errors. PMID:23267547
Obstetric Neuraxial Drug Administration Errors: A Quantitative and Qualitative Analytical Review.
Patel, Santosh; Loveridge, Robert
2015-12-01
Drug administration errors in obstetric neuraxial anesthesia can have devastating consequences. Although fully recognizing that they represent "only the tip of the iceberg," published case reports/series of these errors were reviewed in detail with the aim of estimating the frequency and the nature of these errors. We identified case reports and case series from MEDLINE and performed a quantitative analysis of the involved drugs, error setting, source of error, the observed complications, and any therapeutic interventions. We subsequently performed a qualitative analysis of the human factors involved and proposed modifications to practice. Twenty-nine cases were identified. Various drugs were given in error, but no direct effects on the course of labor, mode of delivery, or neonatal outcome were reported. Four maternal deaths from the accidental intrathecal administration of tranexamic acid were reported, all occurring after delivery of the fetus. A range of hemodynamic and neurologic signs and symptoms were noted, but the most commonly reported complication was the failure of the intended neuraxial anesthetic technique. Several human factors were present; most common factors were drug storage issues and similar drug appearance. Four practice recommendations were identified as being likely to have prevented the errors. The reported errors exposed latent conditions within health care systems. We suggest that the implementation of the following processes may decrease the risk of these types of drug errors: (1) Careful reading of the label on any drug ampule or syringe before the drug is drawn up or injected; (2) labeling all syringes; (3) checking labels with a second person or a device (such as a barcode reader linked to a computer) before the drug is drawn up or administered; and (4) use of non-Luer lock connectors on all epidural/spinal/combined spinal-epidural devices. Further study is required to determine whether routine use of these processes will reduce drug error.
NASA Astrophysics Data System (ADS)
Yu, H.; Russell, A. G.; Mulholland, J. A.
2017-12-01
In air pollution epidemiologic studies with spatially resolved air pollution data, exposures are often estimated using the home locations of individual subjects. Due primarily to lack of data or logistic difficulties, the spatiotemporal mobility of subjects is mostly neglected, which is expected to result in exposure misclassification errors. In this study, we applied detailed cell phone location data to characterize potential exposure misclassification errors associated with home-based exposure estimation of air pollution. The cell phone data sample consists of 9,886 unique simcard IDs collected on one mid-week day in October 2013 in Shenzhen, China. The Community Multi-scale Air Quality model was used to simulate hourly ambient concentrations of six chosen pollutants at 3 km spatial resolution, which were then fused with observational data to correct for potential modeling biases and errors. Air pollution exposure for each simcard ID was estimated by matching hourly pollutant concentrations with the detailed location data for the corresponding IDs. Finally, the results were compared with exposure estimates obtained using the home-location method to assess potential exposure misclassification errors. Our results show that the home-based method is likely to have substantial exposure misclassification errors, over-estimating exposures for subjects with higher exposure levels and under-estimating exposures for those with lower exposure levels. This has the potential to lead to a bias toward the null in the health effect estimates. Our findings suggest that the use of cell phone data has the potential to improve the characterization of exposure and exposure misclassification in air pollution epidemiology studies.
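The mobility-based exposure estimate is a time-weighted average: each hour's modeled concentration is sampled at the grid cell the phone actually occupied, rather than always at the home cell. A toy sketch of the two estimators, with placeholder concentration values rather than the study's data:

```python
import numpy as np

def mobility_exposure(hourly_conc, hourly_cells):
    """Daily exposure as the mean concentration over the cells a person
    actually occupied each hour (24 entries), per the mobility-based method."""
    return float(np.mean([hourly_conc[h][cell] for h, cell in enumerate(hourly_cells)]))

def home_exposure(hourly_conc, home_cell):
    """Daily exposure assuming the person stays at the home grid cell."""
    return float(np.mean([hourly_conc[h][home_cell] for h in range(len(hourly_conc))]))

# Toy example: 24 hourly concentration maps over 4 grid cells (placeholders).
rng = np.random.default_rng(3)
conc = rng.uniform(10, 80, size=(24, 4))     # e.g. hypothetical PM2.5 in ug/m3
cells = [0] * 8 + [2] * 9 + [0] * 7          # home cell 0, workplace cell 2
print(home_exposure(conc, 0), mobility_exposure(conc, cells))
```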
Adaptation to sensory-motor reflex perturbations is blind to the source of errors.
Hudson, Todd E; Landy, Michael S
2012-01-06
In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.
Error rates in forensic DNA analysis: definition, numbers, impact and communication.
Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid
2014-09-01
Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Investigation of technology needs for avoiding helicopter pilot error related accidents
NASA Technical Reports Server (NTRS)
Chais, R. I.; Simpson, W. E.
1985-01-01
Pilot error, which is cited as a cause or related factor in most rotorcraft accidents, was examined. Pilot-error-related accidents in helicopters were investigated to identify areas in which new technology could reduce or eliminate the underlying causes of these human errors. The aircraft accident database at the U.S. Army Safety Center was studied as the source of data on helicopter accidents. A randomly selected sample of 110 aircraft records was analyzed on a case-by-case basis to assess the nature of problems which need to be resolved and the applicable technology implications. Six technology areas in which there appears to be a need for new or increased emphasis are identified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noel, Camille E.; Gutti, VeeraRajesh; Bosch, Walter
Purpose: To quantify the potential impact of the Integrating the Healthcare Enterprise–Radiation Oncology Quality Assurance with Plan Veto (QAPV) on the patient safety of external beam radiation therapy (RT) operations. Methods and Materials: An institutional database of events (errors and near-misses) was used to evaluate the ability of QAPV to prevent clinically observed events. We analyzed reported events that were related to Digital Imaging and Communications in Medicine RT plan parameter inconsistencies between the intended treatment (on the treatment planning system) and the delivered treatment (on the treatment machine). Critical Digital Imaging and Communications in Medicine RT plan parameters were identified. Each event was scored for importance using the Failure Mode and Effects Analysis methodology. Potential error occurrence (frequency) was derived from the collected event data, along with the potential event severity and the probability of detection with and without the theoretical implementation of the QAPV plan comparison check. Failure Mode and Effects Analysis Risk Priority Numbers (RPNs) with and without QAPV were compared to quantify the potential benefit of clinical implementation of QAPV. Results: The implementation of QAPV could reduce the RPN values for 15 of 22 (71%) of the evaluated parameters, with an overall average reduction in RPN of 68 (range, 0-216). For the 6 high-risk parameters (RPN > 200), the average reduction in RPN value was 163 (range, 108-216). The RPN value reduction for the intermediate-risk (200 > RPN > 100) parameters ranged from 0 to 140. With QAPV, the largest RPN value, for "Beam Meterset", was reduced from 324 to 108. The maximum reduction in RPN value was for Beam Meterset (216, 66.7%), whereas the maximum percentage reduction was for Cumulative Meterset Weight (80, 88.9%). Conclusion: This analysis quantifies the value of the Integrating the Healthcare Enterprise–Radiation Oncology QAPV implementation in clinical workflow. We demonstrate that although QAPV does not provide a comprehensive solution for error prevention in RT, it can have a significant impact on a subset of the most severe clinically observed events.
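The Failure Mode and Effects Analysis scoring behind these figures multiplies occurrence, severity and detectability into a Risk Priority Number, and QAPV's benefit appears as a drop in the detectability factor. A worked sketch; the individual factor scores below are hypothetical, chosen only so the product reproduces the Beam Meterset values quoted in the abstract:

```python
def rpn(occurrence, severity, detectability):
    """Risk Priority Number = occurrence x severity x detectability (each scored 1-10)."""
    return occurrence * severity * detectability

# Hypothetical scores for a "Beam Meterset" inconsistency between the
# treatment planning system and the treatment machine.
before = rpn(occurrence=4, severity=9, detectability=9)  # no automated plan check
after = rpn(occurrence=4, severity=9, detectability=3)   # QAPV plan-veto comparison
print(before, after, before - after)  # 324 108 216
```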
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030
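The two accuracy metrics quoted, RMSE and MAPE, are simple to compute once corrected SST estimates are paired with reference temperatures. A short sketch with placeholder values, not data from the study:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mape(pred, obs):
    """Mean absolute percentage error, in percent."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.mean(np.abs((pred - obs) / obs)) * 100.0)

# Placeholder SST values in kelvin.
obs = np.array([300.1, 301.4, 299.8, 302.0])
pred = np.array([300.5, 300.9, 300.2, 301.6])
print(rmse(pred, obs), mape(pred, obs))
```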
Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C
2013-01-01
Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence of error and the impact of error on these patients. A number of error taxonomies were used to understand the causes of human error and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males. There were 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1) and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8) and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped understand the common sources of error in the general surgical wards and will inform on-going error reduction initiatives. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
On the Limitations of Variational Bias Correction
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Mccarty, Will; Gelaro, Ronald
2018-01-01
Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
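In operational schemes the bias is modeled as a linear combination of predictors whose coefficients are fitted against the observation-minus-background (O−B) departures; the point made above is that whatever is systematic in O−B ends up attributed to observation bias, regardless of its true source. A simplified offline illustration with synthetic departures and placeholder predictors (the real scheme estimates the coefficients inside the variational analysis):

```python
import numpy as np

# Synthetic O-B departures for one satellite channel, plus bias predictors
# (a constant term and an air-mass proxy); all values are placeholders.
rng = np.random.default_rng(2)
n = 500
predictors = np.column_stack([np.ones(n), rng.normal(size=n)])
true_coeffs = np.array([0.8, 0.3])          # constant offset + air-mass term
departures = predictors @ true_coeffs + rng.normal(scale=0.5, size=n)

# Least-squares estimate of the bias coefficients from O-B alone. Note the
# caveat from the abstract: anything systematic in O-B (including background
# or forward-operator error) is absorbed into this "observation bias".
coeffs, *_ = np.linalg.lstsq(predictors, departures, rcond=None)
corrected = departures - predictors @ coeffs
print(coeffs, corrected.mean())
```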
Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S
2015-02-01
We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.
Prevalence and pattern of prescription errors in a Nigerian kidney hospital.
Babatunde, Kehinde M; Akinbodewa, Akinwumi A; Akinboye, Ayodele O; Adejumo, Ademola O
2016-12-01
To determine (i) the prevalence and pattern of prescription errors in our Centre and (ii) appraise pharmacists' intervention and correction of identified prescription errors. A descriptive, single-blinded cross-sectional study. Kidney Care Centre is a public specialist hospital. The monthly patient load averages 60 general out-patient cases and 17.4 in-patients. A total of 31 medical doctors (comprising 2 Consultant Nephrologists, 15 Medical Officers and 14 House Officers), 40 nurses and 24 ward assistants participated in the study. One pharmacist runs the daily call schedule. Prescribers were blinded to the study. Prescriptions containing only galenicals were excluded. An error detection mechanism was set up to identify and correct prescription errors. Life-threatening prescriptions were discussed with the Quality Assurance Team of the Centre, who conveyed such errors to the prescriber without revealing the on-going study. Prevalence of prescription errors, pattern of prescription errors, pharmacist's intervention. A total of 2,660 prescriptions (75.0%) were found to have one form of error or another: illegitimacy 1,388 (52.18%), omission 1,221 (45.90%) and wrong dose 51 (1.92%); no errors of style were detected. The rate of life-threatening errors was low (1.1-2.2%). Errors were found more commonly among junior doctors and non-medical doctors. Only 56 (1.6%) of the errors were detected and corrected during the process of dispensing. Prescription errors related to illegitimacy and omissions were highly prevalent. There is a need to improve the patient-to-healthcare giver ratio. A medication quality assurance unit is needed in our hospitals. No financial support was received by any of the authors for this study.
Saeed, Mohammad
2017-05-01
Systemic lupus erythematosus (SLE) is a complex disorder. Genetic association studies of complex disorders suffer from the following three major issues: phenotypic heterogeneity, false positive (type I error), and false negative (type II error) results. Hence, genes with low to moderate effects are missed in standard analyses, especially after statistical corrections. OASIS is a novel linkage disequilibrium clustering algorithm that can potentially address false positives and negatives in genome-wide association studies (GWAS) of complex disorders such as SLE. OASIS was applied to two SLE dbGAP GWAS datasets (6077 subjects; ∼0.75 million single-nucleotide polymorphisms). OASIS identified three known SLE genes viz. IFIH1, TNIP1, and CD44, not previously reported using these GWAS datasets. In addition, 22 novel loci for SLE were identified and the 5 SLE genes previously reported using these datasets were verified. OASIS methodology was validated using single-variant replication and gene-based analysis with GATES. This led to the verification of 60% of OASIS loci. New SLE genes that OASIS identified and were further verified include TNFAIP6, DNAJB3, TTF1, GRIN2B, MON2, LATS2, SNX6, RBFOX1, NCOA3, and CHAF1B. This study presents the OASIS algorithm, software, and the meta-analyses of two publicly available SLE GWAS datasets along with the novel SLE genes. Hence, OASIS is a novel linkage disequilibrium clustering method that can be universally applied to existing GWAS datasets for the identification of new genes.
Dental Students' Interpretations of Digital Panoramic Radiographs on Completely Edentate Patients.
Kratz, Richard J; Nguyen, Caroline T; Walton, Joanne N; MacDonald, David
2018-03-01
The ability of dental students to interpret digital panoramic radiographs (PANs) of edentulous patients has not been documented. The aim of this retrospective study was to compare the ability of second-year (D2) dental students with that of third- and fourth-year (D3-D4) dental students to interpret and identify positional errors in digital PANs obtained from patients with complete edentulism. A total of 169 digital PANs from edentulous patients were assessed by D2 (n=84) and D3-D4 (n=85) dental students at one Canadian dental school. The correctness of the students' interpretations was determined by comparison to a gold standard established by assessments of the same PANs by two experts (a graduate student in prosthodontics and an oral and maxillofacial radiologist). Data collected were from September 1, 2006, when digital radiography was implemented at the university, to December 31, 2012. Nearly all (95%) of the PANs were acceptable diagnostically despite a high proportion (92%) of positional errors detected. A total of 301 positional errors were identified in the sample. The D2 students identified significantly more (p=0.002) positional errors than the D3-D4 students. There was no significant difference (p=0.059) in the distribution of radiographic interpretation errors between the two student groups when compared to the gold standard. Overall, the category of extragnathic findings had the highest number of false negatives (43) reported. In this study, dental students interpreted digital PANs of edentulous patients satisfactorily, but they were more adept at identifying radiographic findings compared to positional errors. Students should be reminded to examine the entire radiograph thoroughly to ensure extragnathic findings are not missed and to recognize and report patient positional errors.
Menelaou, Evdokia; Paul, Latoya T.; Perera, Surangi N.; Svoboda, Kurt R.
2015-01-01
Nicotine exposure during embryonic stages of development can affect many neurodevelopmental processes. In the developing zebrafish, exposure to nicotine was reported to cause axonal pathfinding errors in the later-born secondary motoneurons (SMN). These alterations in SMN axon morphology coincided with muscle degeneration at high nicotine concentrations (15–30 µM). Previous work showed that the paralytic mutant zebrafish known as sofa potato exhibited nicotine-induced effects on SMN axons at these high concentrations but in the absence of any muscle deficits, indicating that pathfinding errors could occur independent of muscle effects. In this study, we used varying concentrations of nicotine at different developmental windows of exposure to isolate its effects on subpopulations of motoneuron axons. We found that nicotine exposure can affect SMN axon morphology in a dose-dependent manner. At low concentrations of nicotine, SMN axons exhibited pathfinding errors in the absence of any nicotine-induced muscle abnormalities. Moreover, the nicotine exposure paradigms used affected the three subpopulations of SMN axons differently, with the dorsal-projecting SMN axons primarily affected. We then identified morphologically distinct pathfinding errors that best described the nicotine-induced effects on dorsal-projecting SMN axons. To test whether SMN pathfinding was potentially influenced by alterations in the early-born primary motoneurons (PMN), we performed dual-labeling studies in which both PMN and SMN axons were simultaneously labeled with antibodies. We show that only a subset of the SMN axon pathfinding errors coincided with abnormal PMN axonal targeting in nicotine-exposed zebrafish. We conclude that nicotine exposure can exert differential effects depending on the nicotine level and the developmental exposure window. PMID:25668718
NASA Astrophysics Data System (ADS)
Sturtevant, John L.; Liubich, Vlad; Gupta, Rachit
2016-04-01
Edge placement error (EPE) was a term initially introduced to describe the difference between the predicted pattern contour edge and the design target for a single design layer. Strictly speaking, this quantity is not directly measurable in the fab. What is of vital importance is the relative edge placement error between different design layers and, in the era of multipatterning, between the constituent mask sublayers for a single design layer. The critical dimensions (CD) and overlay between two layers can be measured in the fab, and there has always been a strong emphasis on control of overlay between design layers. The progress in this realm has been remarkable, accelerated at least in part by the proliferation of multipatterning, which reduces the available overlay budget by introducing a coupling of overlay and CD errors for the target layer. Computational lithography makes possible the full-chip assessment of two-layer edge-to-edge distances and two-layer contact overlap area. We investigate examples of via-metal model-based analysis of CD and overlay errors, for both single patterning and double patterning. For single patterning, we show the advantage of contour-to-contour simulation over contour-to-target simulation, and how the addition of aberrations in the optical models can provide a more realistic CD-overlay process window (PW) for edge placement errors. For double patterning, the interaction of four-layer CD and overlay errors is very complex, but we illustrate that not only can full-chip verification identify potential two-layer hotspots, but the optical proximity correction engine can also act to mitigate such hotspots and enlarge the joint CD-overlay PW.
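As a back-of-the-envelope illustration of how CD and overlay errors couple into a two-layer edge placement budget, the sketch below estimates the worst-case metal enclosure of a via; all names and numbers are hypothetical, and the full-chip contour-to-contour analysis described above is far more detailed.

```python
# Minimal sketch of a two-layer edge placement budget (illustrative; the
# full-chip contour-to-contour analysis is far richer). All names and numbers
# here are hypothetical.

def via_metal_enclosure(nominal_enclosure_nm, via_cd_error_nm,
                        metal_cd_error_nm, overlay_nm):
    """Worst-case remaining metal enclosure of a via on one side.

    CD errors are totals across both edges, so each edge moves by half;
    overlay rigidly shifts the via toward one metal edge.
    """
    return (nominal_enclosure_nm
            - via_cd_error_nm / 2.0     # via edge moves out by half its CD error
            - metal_cd_error_nm / 2.0   # metal edge pulls in by half its CD error
            - overlay_nm)               # rigid shift of via relative to metal

if __name__ == "__main__":
    margin = via_metal_enclosure(nominal_enclosure_nm=8.0,
                                 via_cd_error_nm=2.0,
                                 metal_cd_error_nm=3.0,
                                 overlay_nm=4.0)
    print(f"Remaining enclosure: {margin:.1f} nm")  # 1.5 nm -> potential hotspot
```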
Using snowball sampling method with nurses to understand medication administration errors.
Sheu, Shuh-Jen; Wei, Ien-Lan; Chen, Ching-Huey; Yu, Shu; Tang, Fu-In
2009-02-01
We aimed to encourage nurses to release information about drug administration errors to increase understanding of error-related circumstances and to identify high-alert situations. Drug administration errors represent the majority of medication errors, but errors are underreported, and effective ways to encourage nurses to actively report errors are lacking. Snowball sampling was conducted to recruit participants. A semi-structured questionnaire was used to record types of error, hospital and nurse backgrounds, patient consequences, error discovery mechanisms, and reporting rates. Eighty-five nurses participated, reporting 328 administration errors (259 actual, 69 near misses). Most errors occurred in medical-surgical wards of teaching hospitals, during day shifts, and were committed by nurses with fewer than two years of experience. The leading errors were wrong drugs and wrong doses, each accounting for about one-third of total errors. Among the 259 actual errors, 83.8% resulted in no adverse effects; among the remaining 16.2%, 6.6% had mild consequences and 9.6% had serious consequences (severe reaction, coma, death). Actual errors and near misses were discovered mainly through double-check procedures by colleagues and by the nurses responsible for the errors; reporting rates were 62.5% (162/259) vs. 50.7% (35/69), and only 3.5% (9/259) vs. 0% (0/69) were disclosed to patients and families. High-alert situations included administration of 15% KCl, insulin and Pitocin; use of intravenous pumps; and implementation of cardiopulmonary resuscitation (CPR). Snowball sampling proved to be an effective way to encourage nurses to release details concerning medication errors. Using these empirical data, we identified high-alert situations and suggest strategies for reducing drug administration errors by nurses. Survey results suggest that nurses should double-check medication administration in known high-alert situations. Nursing management can use snowball sampling to gather error details from nurses in a non-reprimanding atmosphere, helping to establish standard operating procedures for known high-alert situations.
Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.
ERIC Educational Resources Information Center
Monagle, E. Brette
The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First, an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…
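The frequency-count step the guide describes can be kept with a simple tally; the sketch below is one illustrative way to do it, and the error categories are hypothetical.

```python
# Minimal sketch of the frequency-count step: tally classified errors per page
# and report the writer's most common error types (categories are hypothetical).
from collections import Counter

def error_profile(pages):
    """pages: list of lists of error labels found on each edited page."""
    counts = Counter(err for page in pages for err in page)
    return counts.most_common()

if __name__ == "__main__":
    pages = [["comma splice", "passive voice", "comma splice"],
             ["subject-verb agreement", "comma splice"]]
    for error_type, n in error_profile(pages):
        print(f"{error_type}: {n}")
```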
ERIC Educational Resources Information Center
Ulu, Mustafa
2017-01-01
This study aims to identify errors made by primary school students when modelling word problems and to eliminate those errors through scaffolding. A 10-question problem-solving achievement test was used in the research. The qualitative and quantitative designs were utilized together. The study group of the quantitative design comprises 248…
McInnis, Ian; Murray, Sarah J; Serio-Melvin, Maria; Aden, James K; Mann-Salinas, Elizabeth; Chung, Kevin K; Huzar, Todd; Wolf, Steven; Nemeth, Christopher; Pamplin, Jeremy C
Multidisciplinary rounds (MDRs) in the burn intensive care unit serve as an efficient means for clinicians to assess patient status and establish patient care priorities. Both tasks require significant cognitive work, the magnitude of which is relevant because increased cognitive work of task completion has been associated with increased error rates. We sought to quantify this workload during MDR using the National Aeronautics and Space Administration Task Load Index (NASA-TLX). Research staff at three academic regional referral burn centers administered the NASA-TLX to clinicians during MDR. Clinicians assessed their workload associated with 1) "Identify(ing) if the patient is better, same, or worse than yesterday" and 2) "Identify(ing) the most important objectives of care for the patient today." Data were collected on clinician type, years of experience, and hours of direct patient care. Surveys were administered to 116 clinicians in total: 41 physicians, 25 nurses, 13 medical students, and 37 clinicians in other roles. Clinicians with less experience reported more cognitive work when completing both tasks (P < .005). Clinicians in the "others" group (respiratory therapists, dieticians, pharmacists, etc.) reported less cognitive work than all other groups for both tasks (P < .05). The NASA-TLX was an effective tool for collecting perceptions of cognitive workload associated with MDR. Perceived cognitive work varied by clinician type and experience level when completing two key tasks. Less experience was associated with increased perceived work, potentially increasing mental error rates and risk to patients. Creating tools or work processes to reduce cognitive work may improve clinician performance.
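The NASA-TLX aggregates ratings on six subscales; the abstract does not state whether weighted or unweighted scoring was used, so the sketch below shows only the common unweighted ("Raw TLX") average as an illustration, with hypothetical ratings.

```python
# Minimal sketch of an unweighted ("Raw TLX") workload score; the study does not
# state whether weighted or raw scoring was used, so this is illustrative only.
SUBSCALES = ("mental", "physical", "temporal", "performance", "effort", "frustration")

def raw_tlx(ratings):
    """ratings: dict mapping each subscale to a 0-100 rating."""
    missing = [s for s in SUBSCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscale ratings: {missing}")
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

if __name__ == "__main__":
    example = {"mental": 70, "physical": 10, "temporal": 55,
               "performance": 30, "effort": 65, "frustration": 40}
    print(f"Raw TLX = {raw_tlx(example):.1f}")  # 45.0
```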
Confidential Clinician-reported Surveillance of Adverse Events Among Medical Inpatients
Weingart, Saul N; Ship, Amy N; Aronson, Mark D
2000-01-01
BACKGROUND Although iatrogenic injury poses a significant risk to hospitalized patients, detection of adverse events (AEs) is costly and difficult. METHODS The authors developed a confidential reporting method for detecting AEs on a medicine unit of a teaching hospital. Adverse events were defined as patient injuries. Potential adverse events (PAEs) represented errors that could have resulted in harm but did not. Investigators interviewed house officers during morning rounds and by e-mail, asking them to identify obstacles to high-quality care and iatrogenic injuries. They compared house officer reports with hospital incident reports and patients' medical records. A multivariate regression model identified correlates of reporting. RESULTS One hundred ten events occurred, affecting 84 patients. Queries by e-mail (incidence rate ratio [IRR] = 0.16; 95% confidence interval [95% CI], 0.05 to 0.49) and on days when house officers rotated to a new service (IRR = 0.12; 95% CI, 0.02 to 0.91) resulted in fewer reports. The most commonly reported process-of-care problems were inadequate evaluation of the patient (16.4%), failure to monitor or follow up (12.7%), and failure of the laboratory to perform a test (12.7%). Respondents identified 29 (26.4%) AEs, 52 (47.3%) PAEs, and 29 (26.4%) other house officer-identified quality problems. An AE occurred in 2.6% of admissions. The hospital incident reporting system detected only one house officer-reported event. Chart review corroborated 72.9% of events. CONCLUSIONS House officers detect many AEs among inpatients. Confidential peer interviews of front-line providers are a promising method for identifying medical errors and substandard quality. PMID:10940133
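Incidence rate ratios such as those reported above are typically estimated with a Poisson regression; the sketch below shows that general approach on synthetic data, with hypothetical variable names rather than the study's actual model.

```python
# Minimal sketch of estimating incidence rate ratios (IRRs) with a Poisson
# regression, a standard way to obtain IRRs like those above. The variable
# names and data are hypothetical, not the study's.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "email_query": rng.integers(0, 2, n),   # 1 = house officer queried by e-mail
    "rotation_day": rng.integers(0, 2, n),  # 1 = day of rotation to a new service
})
# Simulate report counts with lower rates for e-mail queries and rotation days
rate = np.exp(0.5 - 1.8 * df["email_query"] - 2.0 * df["rotation_day"])
df["reports"] = rng.poisson(rate)

model = smf.glm("reports ~ email_query + rotation_day",
                data=df, family=sm.families.Poisson()).fit()
irr = np.exp(model.params)          # exponentiated coefficients are IRRs
ci = np.exp(model.conf_int())
print(pd.concat([irr.rename("IRR"),
                 ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```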
ERIC Educational Resources Information Center
Shear, Benjamin R.; Zumbo, Bruno D.
2013-01-01
Type I error rates in multiple regression, and hence the chance for false positive research findings, can be drastically inflated when multiple regression models are used to analyze data that contain random measurement error. This article shows the potential for inflated Type I error rates in commonly encountered scenarios and provides new…
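A small simulation makes the inflation concrete: when a true predictor is measured with error, a correlated predictor with no real effect is declared significant far more often than the nominal 5%. The parameters below are illustrative, not taken from the article.

```python
# Minimal simulation sketch of the phenomenon described above: when a covariate
# (x1) that truly predicts y is measured with error, a correlated covariate (x2)
# with no true effect is flagged "significant" far more than 5% of the time.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, reps, rho, reliability = 200, 2000, 0.6, 0.5
false_positives = 0

for _ in range(reps):
    x1 = rng.normal(size=n)
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)  # correlated, no true effect
    y = 1.0 * x1 + rng.normal(size=n)                         # y depends on x1 only
    # x1 observed with measurement error (reliability = var(true) / var(observed))
    x1_obs = x1 + rng.normal(scale=np.sqrt(1 / reliability - 1), size=n)
    X = sm.add_constant(np.column_stack([x1_obs, x2]))
    p_x2 = sm.OLS(y, X).fit().pvalues[2]
    false_positives += p_x2 < 0.05

print(f"Empirical Type I error for x2: {false_positives / reps:.3f}  (nominal 0.05)")
```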