Sample records for technical human errors

  1. Technical approaches for measurement of human errors

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.

    1980-01-01

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.

  2. Avoiding Human Error in Mission Operations: Cassini Flight Experience

    NASA Technical Reports Server (NTRS)

    Burk, Thomas A.

    2012-01-01

    Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.

  3. Localizer Flight Technical Error Measurement and Uncertainty

    DOT National Transportation Integrated Search

    2011-09-18

    Recent United States Federal Aviation Administration (FAA) wake turbulence research conducted at the John A. Volpe National Transportation Systems Center (The Volpe Center) has continued to monitor the representative localizer Flight Technical Error ...

  4. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  5. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  6. Electronic inventory systems and barcode technology: impact on pharmacy technical accuracy and error liability.

    PubMed

    Oldland, Alan R; Golightly, Larry K; May, Sondra K; Barber, Gerard R; Stolpman, Nancy M

    2015-01-01

    To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems were associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training.
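
    The dispensing-error rates quoted above are simple ratios, and the period-to-period P values are consistent with a 2x2 test of proportions. The sketch below is a minimal illustration of that arithmetic, assuming each period was compared against the unit-dose baseline; the use of scipy's chi2_contingency here is an assumption, not necessarily the test the authors ran.

      from scipy.stats import chi2_contingency

      # (period label, technical errors, doses checked) -- counts taken from the abstract
      periods = [
          ("UD baseline",               24, 15271),
          ("ADCs",                      10, 7379),
          ("ADCs + barcode",            27, 19708),
          ("ADCs + barcode + training", 13, 26200),
      ]

      base_err, base_n = periods[0][1], periods[0][2]
      for name, err, n in periods:
          rate = 100.0 * err / n
          if name == "UD baseline":
              print(f"{name:27s} {rate:.3f}%")
              continue
          # 2x2 table: errors vs. error-free doses, comparison period vs. baseline
          table = [[base_err, base_n - base_err], [err, n - err]]
          _, p, _, _ = chi2_contingency(table)
          print(f"{name:27s} {rate:.3f}%  (vs. baseline p = {p:.3f})")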

  7. Electronic Inventory Systems and Barcode Technology: Impact on Pharmacy Technical Accuracy and Error Liability

    PubMed Central

    Oldland, Alan R.; May, Sondra K.; Barber, Gerard R.; Stolpman, Nancy M.

    2015-01-01

    Purpose: To measure the effects associated with sequential implementation of electronic medication storage and inventory systems and product verification devices on pharmacy technical accuracy and rates of potential medication dispensing errors in an academic medical center. Methods: During four 28-day periods of observation, pharmacists recorded all technical errors identified at the final visual check of pharmaceuticals prior to dispensing. Technical filling errors involving deviations from order-specific selection of product, dosage form, strength, or quantity were documented when dispensing medications using (a) a conventional unit dose (UD) drug distribution system, (b) an electronic storage and inventory system utilizing automated dispensing cabinets (ADCs) within the pharmacy, (c) ADCs combined with barcode (BC) verification, and (d) ADCs and BC verification utilized with changes in product labeling and individualized personnel training in systems application. Results: Using a conventional UD system, the overall incidence of technical error was 0.157% (24/15,271). Following implementation of ADCs, the comparative overall incidence of technical error was 0.135% (10/7,379; P = .841). Following implementation of BC scanning, the comparative overall incidence of technical error was 0.137% (27/19,708; P = .729). Subsequent changes in product labeling and intensified staff training in the use of BC systems were associated with a decrease in the rate of technical error to 0.050% (13/26,200; P = .002). Conclusions: Pharmacy ADCs and BC systems provide complementary effects that improve technical accuracy and reduce the incidence of potential medication dispensing errors if this technology is used with comprehensive personnel training. PMID:25684799

  8. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    PubMed

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking) when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  9. The Relationship Between Technical Errors and Decision Making Skills in the Junior Resident

    PubMed Central

    Nathwani, J. N.; Fiers, R.M.; Ray, R.D.; Witt, A.K.; Law, K. E.; DiMarco, S.M.; Pugh, C.M.

    2017-01-01

    Objective The purpose of this study is to co-evaluate resident technical errors and decision-making capabilities during placement of a subclavian central venous catheter (CVC). We hypothesize that there will be significant correlations between scenario based decision making skills and technical proficiency in central line insertion. We also predict residents will have problems in anticipating common difficulties and generating solutions associated with line placement. Design Participants were asked to insert a subclavian central line on a simulator. After completion, residents were presented with a real life patient photograph depicting CVC placement and asked to anticipate difficulties and generate solutions. Error rates were analyzed using chi-square tests and a 5% expected error rate. Correlations were sought by comparing technical errors and scenario based decision making. Setting This study was carried out at seven tertiary care centers. Participants Study participants (N=46) consisted largely of first-year research residents that could be followed longitudinally. Second year research and clinical residents were not excluded. Results Six checklist errors were committed more often than anticipated. Residents performed an average of 1.9 errors, significantly more than the 1 error, at most, per person expected (t(44)=3.82, p<.001). The most common error was performance of the procedure steps in the wrong order (28.5%, P<.001). Some of the residents (24%) had no errors, 30% committed one error, and 46% committed more than one error. The number of technical errors committed negatively correlated with the total number of commonly identified difficulties and generated solutions (r(33)= −.429, p=.021, r(33)= −.383, p=.044 respectively). Conclusions Almost half of the surgical residents committed multiple errors while performing subclavian CVC placement. The correlation between technical errors and decision making skills suggests a critical need to train
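
    The statistics named in this abstract (a one-sample t-test against the one-error-per-person expectation and correlations between error counts and generated solutions) can be illustrated with a short sketch. The per-resident data below are invented; only the analysis steps follow the record.

      import numpy as np
      from scipy.stats import pearsonr, ttest_1samp

      rng = np.random.default_rng(0)
      n_residents = 45

      # Hypothetical per-resident counts; only the analysis mirrors the abstract.
      tech_errors = rng.poisson(1.9, n_residents)                               # checklist errors committed
      solutions = np.clip(rng.poisson(4, n_residents) - tech_errors, 0, None)   # solutions generated

      # One-sample t-test against the "at most 1 error per person" expectation
      t_stat, p_val = ttest_1samp(tech_errors, popmean=1.0, alternative="greater")
      print(f"mean errors = {tech_errors.mean():.2f}, t({n_residents - 1}) = {t_stat:.2f}, p = {p_val:.4f}")

      # Correlation between error count and number of generated solutions
      r, p_r = pearsonr(tech_errors, solutions)
      print(f"r = {r:.3f}, p = {p_r:.3f}")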

  10. A stochastic dynamic model for human error analysis in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Delgado-Loperena, Dharma

    Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that historically have independently studied the nature of error and human behavior; including concepts derived from fractal and chaos theory; and suggests re-evaluation of base theory regarding human error. The results of this research were based on comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for other formulations used to study human error consequences. The search for literature regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or those who employed the ecological model in their work. The study of patterns obtained from the rupture of a steam generator tube (SGTR) event simulation provided a direct application to aspects of control room operations in nuclear power plant operations. In doing so, a conceptual foundation based on understanding the patterns of human error can be gleaned, helping to reduce and prevent undesirable events.

  11. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that: humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors; and adaptation to conditions is a potent influence on human behavior in discretionary situations.

  12. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  13. Patterns of technical error among surgical malpractice claims: an analysis of strategies to prevent injury to surgical patients.

    PubMed

    Regenbogen, Scott E; Greenberg, Caprice C; Studdert, David M; Lipsitz, Stuart R; Zinner, Michael J; Gawande, Atul A

    2007-11-01

    To identify the most prevalent patterns of technical errors in surgery, and evaluate commonly recommended interventions in light of these patterns. The majority of surgical adverse events involve technical errors, but little is known about the nature and causes of these events. We examined characteristics of technical errors and common contributing factors among closed surgical malpractice claims. Surgeon reviewers analyzed 444 randomly sampled surgical malpractice claims from four liability insurers. Among 258 claims in which injuries due to error were detected, 52% (n = 133) involved technical errors. These technical errors were further analyzed with a structured review instrument designed by qualitative content analysis. Forty-nine percent of the technical errors caused permanent disability; an additional 16% resulted in death. Two-thirds (65%) of the technical errors were linked to manual error, 9% to errors in judgment, and 26% to both manual and judgment error. A minority of technical errors involved advanced procedures requiring special training ("index operations"; 16%), surgeons inexperienced with the task (14%), or poorly supervised residents (9%). The majority involved experienced surgeons (73%), and occurred in routine, rather than index, operations (84%). Patient-related complexities (including emergencies, difficult or unexpected anatomy, and previous surgery) contributed to 61% of technical errors, and technology or systems failures contributed to 21%. Most technical errors occur in routine operations with experienced surgeons, under conditions of increased patient complexity or systems failure. Commonly recommended interventions, including restricting high-complexity operations to experienced surgeons, additional training for inexperienced surgeons, and stricter supervision of trainees, are likely to address only a minority of technical errors. Surgical safety research should instead focus on improving decision-making and performance in routine

  14. Human error in airway facilities.

    DOT National Transportation Integrated Search

    2001-01-01

    This report examines human errors in Airway Facilities (AF) with the intent of preventing these errors from being passed on to the new Operations Control Centers. To effectively manage errors, they first have to be identified. Human factors engin...

  15. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  16. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  17. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. Therefore, the objective of this research effort is to establish stochastic models substantiated by a sound theoretical foundation to address the occurrence of human errors in the processing of the space shuttle.

  18. Understanding human management of automation errors.

    PubMed

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2014-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.

  19. Human error and human factors engineering in health care.

    PubMed

    Welch, D L

    1997-01-01

    Human error is inevitable. It happens in health care systems as it does in all other complex systems, and no measure of attention, training, dedication, or punishment is going to stop it. The discipline of human factors engineering (HFE) has been dealing with the causes and effects of human error since the 1940's. Originally applied to the design of increasingly complex military aircraft cockpits, HFE has since been effectively applied to the problem of human error in such diverse systems as nuclear power plants, NASA spacecraft, the process control industry, and computer software. Today the health care industry is becoming aware of the costs of human error and is turning to HFE for answers. Just as early experimental psychologists went beyond the label of "pilot error" to explain how the design of cockpits led to air crashes, today's HFE specialists are assisting the health care industry in identifying the causes of significant human errors in medicine and developing ways to eliminate or ameliorate them. This series of articles will explore the nature of human error and how HFE can be applied to reduce the likelihood of errors and mitigate their effects.

  20. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  1. Temporal uncertainty analysis of human errors based on interrelationships among multiple factors: a case of Minuteman III missile accident.

    PubMed

    Rong, Hao; Tian, Jin; Zhao, Tingdi

    2016-01-01

    In traditional approaches of human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some of the conditions (especially organizational or managerial conditions) can hardly be included, and thus the analysis is incomplete and does not reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitatively estimating the human error probability (HEP) and its related variables changing over time in a long period. Taking the Minuteman III missile accident in 2008 as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; also the critical factors are determined in terms of the impact that the variables have on risks in different time periods. It is indicated that both technical and organizational aspects should be focused on to minimize human errors in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
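
    The record describes a system-dynamics (SD) model in which technical and organizational variables interact and the human error probability (HEP) evolves over decades. The toy sketch below shows only the general mechanics of such a model (stocks, feedback rates, Euler integration); the stocks, coefficients, and HEP relation are illustrative and are not the authors' model.

      import numpy as np

      dt, years = 0.1, 50
      t = np.arange(0, years, dt)

      training = 0.8           # stock: workforce training level (0..1), hypothetical
      vigilance = 0.9          # stock: organizational vigilance (0..1), hypothetical
      hep = []

      for _ in t:
          # Feedback loops: training is refreshed by vigilance but decays; vigilance
          # erodes over quiet periods and recovers when error likelihood rises.
          p_err = 0.01 * (1 - training) + 0.02 * (1 - vigilance)
          training  += dt * (0.05 * vigilance - 0.03 * training)
          vigilance += dt * (0.5 * p_err - 0.02 * vigilance)
          hep.append(p_err)

      print(f"HEP at year 0: {hep[0]:.4f}, year 25: {hep[len(hep)//2]:.4f}, year 50: {hep[-1]:.4f}")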

  2. Correction of a Technical Error in the Golf Swing: Error Amplification Versus Direct Instruction.

    PubMed

    Milanese, Chiara; Corte, Stefano; Salvetti, Luca; Cavedon, Valentina; Agostini, Tiziano

    2016-01-01

    Performance errors drive motor learning for many tasks. The authors' aim was to determine which of two strategies, method of amplification of error (MAE) or direct instruction (DI), would be more beneficial for error correction during a full golfing swing with a driver. Thirty-four golfers were randomly assigned to one of three training conditions (MAE, DI, and control). Participants were tested in a practice session in which each golfer performed 7 pretraining trials, 6 training-intervention trials, and 7 posttraining trials, and a retention test after 1 week. An optoelectronic motion capture system was used to measure the kinematic parameters of each golfer's performance. Results showed that MAE is an effective strategy for correcting technical errors, leading to a rapid improvement in performance. These findings could have practical implications for sport psychology and physical education because, while practice is obviously necessary for improving learning, the efficacy of the learning process is essential in enhancing learners' motivation and sport enjoyment.

  3. Prevalence of technical errors and periapical lesions in a sample of endodontically treated teeth: a CBCT analysis.

    PubMed

    Nascimento, Eduarda Helena Leandro; Gaêta-Araujo, Hugo; Andrade, Maria Fernanda Silva; Freitas, Deborah Queiroz

    2018-01-21

    The aims of this study are to identify the most frequent technical errors in endodontically treated teeth and to determine which root canals were most often associated with those errors, as well as to relate endodontic technical errors and the presence of coronal restorations with periapical status by means of cone-beam computed tomography images. Six hundred eighteen endodontically treated teeth (1146 root canals) were evaluated for the quality of their endodontic treatment and for the presence of coronal restorations and periapical lesions. Each root canal was classified according to dental groups, and the endodontic technical errors were recorded. Chi-square tests and descriptive analyses were performed. Six hundred eighty root canals (59.3%) had periapical lesions. Maxillary molars and anterior teeth showed higher prevalence of periapical lesions (p < 0.05). Endodontic treatment quality and coronal restoration were associated with periapical status (p < 0.05). Underfilling was the most frequent technical error in all root canals, except for the second mesiobuccal root canal of maxillary molars and the distobuccal root canal of mandibular molars, which were non-filled in 78.4 and 30% of the cases, respectively. There is a high prevalence of apical radiolucencies, which increased in the presence of poor coronal restorations, endodontic technical errors, and when both conditions were concomitant. Underfilling was the most frequent technical error, followed by non-homogeneous and non-filled canals. Evaluation of endodontic treatment quality that considers every single root canal aims at warning dental practitioners about the prevalence of technical errors that could be avoided with careful treatment planning and execution.

  4. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
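
    A Monte Carlo treatment of the kind described here can be sketched as follows: ordinary analytical noise is combined with a small residual probability that an undetected human error biases a result, and the spread of the combined results is compared with the ordinary uncertainty. All numerical values below are hypothetical, not the published expert-judgment figures.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 200_000

      u_analytical = 0.02      # standard uncertainty from the usual budget (e.g. pH units), hypothetical
      p_residual = 0.01        # residual risk of an undetected human error per result, hypothetical
      error_bias = 0.15        # typical magnitude of the bias such an error introduces, hypothetical

      noise = rng.normal(0.0, u_analytical, n)
      occurs = rng.random(n) < p_residual
      bias = rng.normal(0.0, error_bias, n) * occurs

      u_without = noise.std(ddof=1)
      u_with = (noise + bias).std(ddof=1)
      print(f"u without residual human error: {u_without:.4f}")
      print(f"u including residual human error: {u_with:.4f}")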

  5. Application of objective clinical human reliability analysis (OCHRA) in assessment of technical performance in laparoscopic rectal cancer surgery.

    PubMed

    Foster, J D; Miskovic, D; Allison, A S; Conti, J A; Ockrim, J; Cooper, E J; Hanna, G B; Francis, N K

    2016-06-01

    Laparoscopic rectal resection is technically challenging, with outcomes dependent upon technical performance. No robust objective assessment tool exists for laparoscopic rectal resection surgery. This study aimed to investigate the application of the objective clinical human reliability analysis (OCHRA) technique for assessing technical performance of laparoscopic rectal surgery and explore the validity and reliability of this technique. Laparoscopic rectal cancer resection operations were described in the format of a hierarchical task analysis. Potential technical errors were defined. The OCHRA technique was used to identify technical errors enacted in videos of twenty consecutive laparoscopic rectal cancer resection operations from a single site. The procedural task, spatial location, and circumstances of all identified errors were logged. Clinical validity was assessed through correlation with clinical outcomes; reliability was assessed by test-retest. A total of 335 execution errors were identified, with a median of 15 per operation. More errors were observed during pelvic tasks compared with abdominal tasks (p < 0.001). Within the pelvis, more errors were observed during dissection on the right side than the left (p = 0.03). Test-retest confirmed reliability (r = 0.97, p < 0.001). A significant correlation was observed between error frequency and mesorectal specimen quality (r_s = 0.52, p = 0.02) and with blood loss (r_s = 0.609, p = 0.004). OCHRA offers a valid and reliable method for evaluating technical performance of laparoscopic rectal surgery.
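
    The OCHRA analysis above reduces to tallying observed errors per operation and rank-correlating the totals with clinical outcomes. A minimal sketch with invented per-operation data; only the method (error counts plus a Spearman correlation) follows the abstract.

      from scipy.stats import spearmanr

      # Hypothetical per-operation tallies; the real data are not given in the record.
      errors_per_operation = [12, 15, 9, 22, 17, 14, 30, 11, 19, 16]
      blood_loss_ml        = [150, 220, 100, 480, 300, 200, 650, 120, 400, 260]

      rho, p = spearmanr(errors_per_operation, blood_loss_ml)
      print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")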

  6. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
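
    The Pareto step mentioned above is simply a ranked cumulative-share table of incident causes. A small sketch, using hypothetical cause labels and counts rather than the actual Voyager/Magellan ISA data.

      from collections import Counter

      # Hypothetical cause labels for incident reports; only the Pareto ranking step is illustrated.
      isa_causes = (["human error"] * 600 + ["software defect"] * 450 +
                    ["hardware anomaly"] * 330 + ["procedure gap"] * 200)

      counts = Counter(isa_causes)
      total = sum(counts.values())
      cumulative = 0.0
      for cause, n in counts.most_common():
          share = 100.0 * n / total
          cumulative += share
          print(f"{cause:18s} {n:4d}  {share:5.1f}%  (cumulative {cumulative:5.1f}%)")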

  7. Human Error: The Stakes Are Raised.

    ERIC Educational Resources Information Center

    Greenberg, Joel

    1980-01-01

    Mistakes related to the operation of nuclear power plants and other technologically complex systems are discussed. Recommendations are given for decreasing the chance of human error in the operation of nuclear plants. The causes of the Three Mile Island incident are presented in terms of the human error element. (SA)

  8. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  9. Technical errors in planar bone scanning.

    PubMed

    Naddaf, Sleiman Y; Collier, B David; Elgazzar, Abdelhamid H; Khalil, Magdy M

    2004-09-01

    Optimal technique for planar bone scanning improves image quality, which in turn improves diagnostic efficacy. Because planar bone scanning is one of the most frequently performed nuclear medicine examinations, maintaining high standards for this examination is a daily concern for most nuclear medicine departments. Although some problems such as patient motion are frequently encountered, the degraded images produced by many other deviations from optimal technique are rarely seen in clinical practice and therefore may be difficult to recognize. The objectives of this article are to list optimal techniques for 3-phase and whole-body bone scanning, to describe and illustrate a selection of deviations from these optimal techniques for planar bone scanning, and to explain how to minimize or avoid such technical errors.

  10. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
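
    For a balanced one-factor random-effects model (patients as the random factor, fractions as replicates), the standard ANOVA-based estimators referred to above take the within-patient mean square as the random component and the scaled excess of the between-patient mean square as the systematic component. The sketch below runs those estimators on simulated data; it is a minimal illustration under those assumptions, not the authors' code.

      import numpy as np

      rng = np.random.default_rng(1)
      n_patients, n_fractions = 20, 5
      sigma_systematic, sigma_random = 2.0, 1.5    # mm, simulation ground truth (hypothetical)

      patient_means = rng.normal(0.0, sigma_systematic, n_patients)
      data = patient_means[:, None] + rng.normal(0.0, sigma_random, (n_patients, n_fractions))

      grand_mean = data.mean()
      ms_within = data.var(axis=1, ddof=1).mean()                  # within-patient mean square
      ms_between = n_fractions * data.mean(axis=1).var(ddof=1)     # between-patient mean square

      random_err = np.sqrt(ms_within)
      systematic_err = np.sqrt(max(ms_between - ms_within, 0.0) / n_fractions)

      print(f"population mean setup error: {grand_mean:.2f} mm")
      print(f"estimated systematic error (Sigma): {systematic_err:.2f} mm")
      print(f"estimated random error (sigma): {random_err:.2f} mm")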

  11. Intervention strategies for the management of human error

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1993-01-01

    This report examines the management of human error in the cockpit. The principles probably apply as well to other applications in the aviation realm (e.g. air traffic control, dispatch, weather, etc.) as well as other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also a means of keeping an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.

  12. Managing human error in aviation.

    PubMed

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  13. The contributions of human factors on human error in Malaysia aviation maintenance industries

    NASA Astrophysics Data System (ADS)

    Padil, H.; Said, M. N.; Azizan, A.

    2018-05-01

    Aviation maintenance is a multitasking activity in which individuals perform varied tasks under constant pressure to meet deadlines as well as challenging work conditions. These situational characteristics combined with human factors can lead to various types of human related errors. The primary objective of this research is to develop a structural relationship model that incorporates human factors, organizational factors, and their impact on human errors in aviation maintenance. Towards that end, a questionnaire was developed which was administered to Malaysian aviation maintenance professionals. A Structural Equation Modelling (SEM) approach was used in this study utilizing AMOS software. Results showed a significant relationship between human factors and human errors, as tested in the model. Human factors had a partial effect on organizational factors while organizational factors had a direct and positive impact on human errors. It was also revealed that organizational factors contributed to human errors when coupled with the human factors construct. This study has contributed to the advancement of knowledge on human factors affecting safety and has provided guidelines for improving human factors performance relating to aviation maintenance activities and could be used as a reference for improving safety performance in Malaysian aviation maintenance companies.

  14. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
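
    For readers unfamiliar with HEART, the assessed HEP is typically obtained by scaling a generic task's nominal error probability by each applicable error-producing condition (EPC), weighted by its assessed proportion of affect (APOA). The sketch below shows that arithmetic with hypothetical values; the task type, EPC multipliers, and APOA figures are illustrative and are not taken from this study.

      # Illustrative HEART calculation: HEP = nominal_HEP * product of ((EPC - 1) * APOA + 1)
      nominal_hep = 0.003          # generic task type nominal human error probability (hypothetical)

      # (EPC maximum multiplier, assessed proportion of affect) -- hypothetical values
      epcs = [
          (11.0, 0.4),             # e.g. shortage of time
          (4.0,  0.2),             # e.g. poor or ambiguous system feedback
          (1.2,  1.0),             # e.g. a minor additional condition
      ]

      hep = nominal_hep
      for multiplier, apoa in epcs:
          hep *= (multiplier - 1.0) * apoa + 1.0

      print(f"Assessed HEP = {hep:.4f}")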

  15. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  16. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  17. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  18. Reflections on human error - Matters of life and death

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1989-01-01

    The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.

  19. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1999-01-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.

  20. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. Experimental verification of the models will be tested in a flight quality handling simulation.

  1. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  2. Human error and the search for blame

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.

  3. Syntactical and Punctuation Errors: An Analysis of Technical Writing of University Students Science College, Taif University, KSA

    ERIC Educational Resources Information Center

    Alamin, Abdulamir; Ahmed, Sawsan

    2012-01-01

    Analyzing errors committed by second language learners during their first year of study at the University of Taif can offer insights and knowledge of the learners' difficulties in acquiring technical English communication. With reference to the errors analyzed, the researcher found that the learners' failure to understand basic English grammar…

  4. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  5. Human research ethics committees in technical universities.

    PubMed

    Koepsell, David; Brinkman, Willem-Paul; Pont, Sylvia

    2014-07-01

    Human research ethics has developed in both theory and practice mostly from experiences in medical research. Human participants, however, are used in a much broader range of research than ethics committees oversee, including both basic and applied research at technical universities. Although mandated in the United States, the United Kingdom, Canada, and Australia, non-medical research involving humans need not receive ethics review in much of Europe, Asia, Latin America, and Africa. Our survey of the top 50 technical universities in the world shows that, where not specifically mandated by law, most technical universities do not employ ethics committees to review human studies. As the domains of basic and applied sciences expand, ethics committees are increasingly needed to guide and oversee all such research regardless of legal requirements. We offer as examples, from our experience as an ethics committee in a major European technical university, ways in which such a committee provides needed services and can help ensure more ethical studies involving humans outside the standard medical context. We provide some arguments for creating such committees, and in our supplemental article, we provide specific examples of cases and concerns that may confront technical, engineering, and design research, as well as outline the general framework we have used in creating our committee. © The Author(s) 2014.

  6. The Unfortunate Human Factor: A Selective History of Human Factors for Technical Communicators.

    ERIC Educational Resources Information Center

    Johnson, Robert R.

    1994-01-01

    Reviews moments in the history of human factors that are especially relevant to the field of technical communications. Discusses human factors research that is applicable to technical communications. Focuses on qualitative usability research, minimalism, and human activity interface design. (HB)

  7. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.

  8. Inborn Errors of Human JAKs and STATs

    PubMed Central

    Casanova, Jean-Laurent; Holland, Steven M.; Notarangelo, Luigi D.

    2012-01-01

    Inborn errors of the genes encoding two of the four human JAKs (JAK3 and TYK2) and three of the six human STATs (STAT1, STAT3, and STAT5B) have been described. We review the disorders arising from mutations in these five genes, highlighting the way in which the molecular and cellular pathogenesis of these conditions has been clarified by the discovery of inborn errors of cytokines, hormones, and their receptors, including those interacting with JAKs and STATs. The phenotypic similarities between mice and humans lacking individual JAK-STAT components suggest that the functions of JAKs and STATs are largely conserved in mammals. However, a wide array of phenotypic differences has emerged between mice and humans carrying bi-allelic null alleles of JAK3, TYK2, STAT1, or STAT5B. Moreover, the high level of allelic heterogeneity at the human JAK3, STAT1, and STAT3 loci has revealed highly diverse immunological and clinical phenotypes, which had not been anticipated. PMID:22520845

  9. Inborn errors of human JAKs and STATs.

    PubMed

    Casanova, Jean-Laurent; Holland, Steven M; Notarangelo, Luigi D

    2012-04-20

    Inborn errors of the genes encoding two of the four human JAKs (JAK3 and TYK2) and three of the six human STATs (STAT1, STAT3, and STAT5B) have been described. We review the disorders arising from mutations in these five genes, highlighting the way in which the molecular and cellular pathogenesis of these conditions has been clarified by the discovery of inborn errors of cytokines, hormones, and their receptors, including those interacting with JAKs and STATs. The phenotypic similarities between mice and humans lacking individual JAK-STAT components suggest that the functions of JAKs and STATs are largely conserved in mammals. However, a wide array of phenotypic differences has emerged between mice and humans carrying biallelic null alleles of JAK3, TYK2, STAT1, or STAT5B. Moreover, the high degree of allelic heterogeneity at the human JAK3, TYK2, STAT1, and STAT3 loci has revealed highly diverse immunological and clinical phenotypes, which had not been anticipated. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Normal accidents: human error and medical equipment design.

    PubMed

    Dain, Steven

    2002-01-01

    High-risk systems, which are typical of our technologically complex era, include not just nuclear power plants but also hospitals, anesthesia systems, and the practice of medicine and perfusion. In high-risk systems, no matter how effective safety devices are, some types of accidents are inevitable because the system's complexity leads to multiple and unexpected interactions. It is important for healthcare providers to apply a risk assessment and management process to decisions involving new equipment and procedures or staffing matters in order to minimize the residual risks of latent errors, which are amenable to correction because of the large window of opportunity for their detection. This article provides an introduction to basic risk management and error theory principles and examines ways in which they can be applied to reduce and mitigate the inevitable human errors that accompany high-risk systems. The article also discusses "human factor engineering" (HFE), the process which is used to design equipment/human interfaces in order to mitigate design errors. The HFE process involves interaction between designers and end users to produce a series of continuous refinements that are incorporated into the final product. The article also examines common design problems encountered in the operating room that may predispose operators to commit errors resulting in harm to the patient. While recognizing that errors and accidents are unavoidable, organizations that function within a high-risk system must adopt a "safety culture" that anticipates problems and acts aggressively through an anonymous, "blameless" reporting mechanism to resolve them. We must continuously examine and improve the design of equipment and procedures, personnel, supplies and materials, and the environment in which we work to reduce error and minimize its effects. Healthcare providers must take a leading role in the day-to-day management of the "Perioperative System" and be a role model in

  11. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationship between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.

  12. Structured methods for identifying and correcting potential human errors in aviation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  13. Prediction of human errors by maladaptive changes in event-related brain networks.

    PubMed

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus

    2008-04-22

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.

  14. Prediction of human errors by maladaptive changes in event-related brain networks

    PubMed Central

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D.; Specht, Karsten; Engel, Andreas K.; Hugdahl, Kenneth; von Cramon, D. Yves; Ullsperger, Markus

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve ≈30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123

  15. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Lauber, J. K.; Cooper, G. E.

    1974-01-01

    This report is a brief description of research being undertaken by the National Aeronautics and Space Administration. The project is designed to seek out factors in the aviation system which contribute to human error, and to search for ways of minimizing the potential threat posed by these factors. The philosophy and assumptions underlying the study are discussed, together with an outline of the research plan.

  17. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    PubMed

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study used feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire followed by summarizing the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top 10 errors list based on means with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.

  18. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    NASA Technical Reports Server (NTRS)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  19. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors.

    PubMed

    Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L

    2010-04-01

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
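
    The abstract describes the bias of the mean-of-ratios estimator under additive technical error without giving formulas. The short Monte Carlo sketch below is illustrative only: it shows that bias with an invented true hydration fraction, sample size, and error standard deviations, and does not reproduce the authors' instrumental variables estimator.

    ```python
    # Illustrative sketch only: additive technical error biases the mean-of-ratios
    # estimator of a constant hydration fraction (HF). True HF, error SDs, and
    # sample sizes are made-up values, not those of Kinnamon et al.
    import numpy as np

    rng = np.random.default_rng(0)
    true_hf = 0.73          # assumed true hydration fraction (TBW = HF * FFM)
    n_subjects, n_sims = 200, 2000

    bias = []
    for _ in range(n_sims):
        ffm = rng.uniform(30, 70, n_subjects)            # true fat-free mass (kg)
        tbw = true_hf * ffm                              # true total body water (kg)
        ffm_meas = ffm + rng.normal(0, 2.0, n_subjects)  # additive technical error
        tbw_meas = tbw + rng.normal(0, 1.0, n_subjects)
        hf_hat = np.mean(tbw_meas / ffm_meas)            # mean of individual ratios
        bias.append(hf_hat - true_hf)

    print(f"mean-of-ratios bias: {np.mean(bias):+.4f}")
    ```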

  20. Behind Human Error: Cognitive Systems, Computers and Hindsight

    DTIC Science & Technology

    1994-12-01

  1. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface; the measured data form the basis for analysis and study of the human body, for establishing and revising garment sizes, and for setting up and operating online clothing stores. In this paper, several groups of measured data are obtained, and measurement errors are analyzed by examining error frequency and applying the analysis of variance method from mathematical statistics. The paper also determines the accuracy of the measured data, identifies which parts of the human body are difficult to measure, examines the causes of data errors, and summarizes the key points for minimizing errors as far as possible. By analyzing the measured data on the basis of error frequency, the paper provides reference material for the development of the garment industry.
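
    A minimal sketch of the kind of analysis described above, assuming invented repeated measurements of a single body dimension: a one-way ANOVA across measurement sessions plus a simple error-frequency tally.

    ```python
    # Minimal sketch (not the authors' code): one-way ANOVA across repeated
    # measurement sessions of one body dimension, plus an error-frequency tally.
    # All data below are invented for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_waist = 78.0                                    # cm, assumed true value
    sessions = [true_waist + rng.normal(0, 0.6, 15) for _ in range(3)]

    # ANOVA: do mean measurements differ significantly between sessions?
    f_stat, p_val = stats.f_oneway(*sessions)
    print(f"F = {f_stat:.2f}, p = {p_val:.3f}")

    # Error frequency: how often is a measurement off by more than 1 cm?
    all_meas = np.concatenate(sessions)
    error_rate = np.mean(np.abs(all_meas - true_waist) > 1.0)
    print(f"fraction of measurements off by more than 1 cm: {error_rate:.2%}")
    ```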

  2. A Quality Improvement Project to Decrease Human Milk Errors in the NICU.

    PubMed

    Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E

    2017-02-01

    Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.
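
    The abstract reports scanned-error rates per 1,000 bottles monitored with statistical process control charts. The sketch below is a hedged illustration of one common chart for count data, a u-chart, using invented monthly counts rather than the study's data.

    ```python
    # Hedged sketch: monthly scanned-error rates per 1,000 bottles with u-chart
    # control limits. Counts and bottle volumes are illustrative, not study data.
    import math

    months = [
        # (errors_scanned, bottles_scanned)
        (310, 3200), (280, 3100), (150, 3300), (90, 3400), (40, 3500), (35, 3600),
    ]

    total_errors = sum(e for e, _ in months)
    total_k_bottles = sum(b for _, b in months) / 1000.0
    u_bar = total_errors / total_k_bottles        # centre line, errors per 1,000 bottles

    for errors, bottles in months:
        n_k = bottles / 1000.0
        rate = errors / n_k
        ucl = u_bar + 3 * math.sqrt(u_bar / n_k)
        lcl = max(0.0, u_bar - 3 * math.sqrt(u_bar / n_k))
        flag = "out of control" if rate > ucl or rate < lcl else ""
        print(f"rate={rate:6.1f}  LCL={lcl:6.1f}  UCL={ucl:6.1f}  {flag}")
    ```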

  3. Human error mitigation initiative (HEMI) : summary report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and are cumbersome to characterize as thorough. An alternative and proposed method begins with leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Future recommended steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  4. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
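
    The abstract does not spell out the fuzzy TOPSIS computation. As a rough guide to how such error factors can be ranked, the sketch below shows the simpler crisp TOPSIS procedure on an invented decision matrix, criteria weights, and scores; the fuzzy variant used in the paper is not reproduced.

    ```python
    # Simplified, crisp TOPSIS sketch (the study uses a fuzzy variant).
    # Rows are hypothetical error factors, columns are four criteria;
    # all scores and weights are invented for illustration.
    import numpy as np

    factors = ["adverse physiological states", "physical/mental limitations",
               "coordination/communication/planning", "tooling deficiencies"]
    X = np.array([            # decision matrix: factor x criterion scores
        [7, 6, 8, 5],
        [6, 7, 7, 6],
        [8, 8, 6, 7],
        [4, 5, 5, 8],
    ], dtype=float)
    weights = np.array([0.3, 0.3, 0.2, 0.2])
    benefit = np.array([True, True, True, False])   # last criterion: lower is better

    V = weights * X / np.linalg.norm(X, axis=0)     # normalised, weighted matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    closeness = d_neg / (d_pos + d_neg)             # higher = closer to ideal

    for name, c in sorted(zip(factors, closeness), key=lambda t: -t[1]):
        print(f"{c:.3f}  {name}")
    ```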

  5. Error Analysis in Mathematics. Technical Report #1012

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  6. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    NASA Technical Reports Server (NTRS)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

    The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than e.g. learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

  7. Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).

    PubMed

    Wiegmann, D A; Shappell, S A

    2001-11-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.

  8. Prediction error induced motor contagions in human behaviors.

    PubMed

    Ikegami, Tsuyoshi; Ganesh, Gowrishankar; Takeuchi, Tatsuya; Nakamoto, Hiroki

    2018-05-29

    Motor contagions refer to implicit effects on one's actions induced by observed actions. Motor contagions are believed to be induced simply by action observation and cause an observer's action to become similar to the action observed. In contrast, here we report a new motor contagion that is induced only when the observation is accompanied by prediction errors - differences between actions one observes and those he/she predicts or expects. In two experiments, one on whole-body baseball pitching and another on simple arm reaching, we show that the observation of the same action induces distinct motor contagions, depending on whether prediction errors are present or not. In the absence of prediction errors, as in previous reports, participants' actions changed to become similar to the observed action, while in the presence of prediction errors, their actions changed to diverge away from it, suggesting distinct effects of action observation and action prediction on human actions. © 2018, Ikegami et al.

  9. Human error in hospitals and industrial accidents: current concepts.

    PubMed

    Spencer, F C

    2000-10-01

    Most data concerning errors and accidents are from industrial accidents and airline injuries. General Electric, Alcoa, and Motorola, among others, all have reported complex programs that resulted in a marked reduction in frequency of worker injuries. In the field of medicine, however, with the outstanding exception of anesthesiology, there is a paucity of information, most reports referring to the 1984 Harvard-New York State Study, more than 16 years ago. This scarcity of information indicates the complexity of the problem. It seems very unlikely that simple exhortation or additional regulations will help because the problem lies principally in the multiple human-machine interfaces that constitute modern medical care. The absence of success stories also indicates that the best methods have to be learned by experience. A liaison with industry should be helpful, although the varieties of human illness are far different from a standardized manufacturing process. Concurrent with the studies of industrial and nuclear accidents, cognitive psychologists have intensively studied how the brain stores and retrieves information. Several concepts have emerged. First, errors are not character defects to be treated by the classic approach of discipline and education, but are byproducts of normal thinking that occur frequently. Second, major accidents are rarely caused by a single error; instead, they are often a combination of chronic system errors, termed latent errors. Identifying and correcting these latent errors should be the principal focus for corrective planning rather than searching for an individual culprit. This nonpunitive concept of errors is a key basis for an effective reporting system, brilliantly demonstrated in aviation with the ASRS system developed more than 25 years ago. The ASRS currently receives more than 30,000 reports annually and is credited with the remarkable increase in safety of airplane travel. Adverse drug events constitute about 25% of hospital

  10. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

    ERIC Educational Resources Information Center

    Sarcevic, Aleksandra

    2009-01-01

    An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

  11. Human error analysis of commercial aviation accidents using the human factors analysis and classification system (HFACS)

    DOT National Transportation Integrated Search

    2001-02-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework : originally developed and tested within the U.S. military as a tool for investigating and analyzing the human : causes of aviation accidents. Based upon ...

  12. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    PubMed Central

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  13. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice the proportion of rotary-wing (44%) compared with fixed-wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  14. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  15. Modeling human tracking error in several different anti-tank systems

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1981-01-01

    An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.

  16. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background: A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods: This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results: The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion: The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
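
    The abstract reports SPAR-H human error probabilities without showing the calculation. The sketch below is a hedged illustration of a SPAR-H-style adjustment of a nominal HEP by performance shaping factor (PSF) multipliers; the nominal value and multipliers are assumptions, not the study's worksheet values, and real analyses should follow the published SPAR-H worksheets.

    ```python
    # Hedged sketch of a SPAR-H-style human error probability (HEP) calculation.
    # Nominal HEPs and PSF multipliers below are illustrative only.
    def spar_h_hep(nominal_hep, psf_multipliers):
        """Adjust a nominal HEP by PSF multipliers.

        Applies the SPAR-H adjustment factor when several negative PSFs are
        present, which keeps the result below 1.0.
        """
        composite = 1.0
        for m in psf_multipliers:
            composite *= m
        negative = sum(1 for m in psf_multipliers if m > 1.0)
        if negative >= 3:
            return (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
        return min(1.0, nominal_hep * composite)

    # Example: an action-type task (nominal HEP 0.001) under high stress (x2),
    # poor procedures (x5), and high complexity (x2) -- all assumed values.
    print(f"HEP = {spar_h_hep(0.001, [2, 5, 2]):.4f}")
    ```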

  17. Use of modeling to identify vulnerabilities to human error in laparoscopy.

    PubMed

    Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra

    2010-01-01

    This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
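
    The modified FMEA described above scores each potential error for its probability of occurrence, its consequences, and its probability of detection. A minimal sketch of the prioritisation step, with invented error modes and scores, might look like the following.

    ```python
    # Illustrative sketch of the risk-prioritisation step of a modified FMEA:
    # each potential error gets occurrence, severity, and detection scores on a
    # 1-10 scale and is ranked by risk priority number (RPN). The error modes
    # and scores are invented, not those of the study.
    potential_errors = [
        # (description, occurrence, severity, detection); detection: 10 = hardest to detect
        ("needle inserted at wrong angle", 4, 8, 3),
        ("insufflation started before placement confirmed", 3, 9, 5),
        ("excessive insertion force applied", 5, 7, 4),
        ("valve left closed during aspiration test", 2, 4, 6),
    ]

    ranked = sorted(potential_errors, key=lambda e: e[1] * e[2] * e[3], reverse=True)
    for desc, occ, sev, det in ranked:
        print(f"RPN={occ * sev * det:4d}  {desc}")
    ```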

  18. THERP and HEART integrated methodology for human error assessment

    NASA Astrophysics Data System (ADS)

    Castiglia, Francesco; Giardina, Mariarosa; Tomarchio, Elio

    2015-11-01

    An integrated THERP and HEART methodology is proposed to investigate accident scenarios that involve operator errors during high-dose-rate (HDR) treatments. The new approach has been modified on the basis of the fuzzy set concept with the aim of prioritizing an exhaustive list of erroneous tasks that can lead to patient radiological overexposures. The results allow for the identification of human errors that are necessary to achieve a better understanding of health hazards in the radiotherapy treatment process, so that it can be properly monitored and appropriately managed.
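
    The abstract does not reproduce the underlying HEART calculation that the integrated methodology builds on. As a hedged illustration, the sketch below implements the standard HEART formula with assumed generic task unreliability and error-producing-condition (EPC) values; the fuzzy extension described in the paper is not shown.

    ```python
    # Hedged sketch of the standard (non-fuzzy) HEART calculation; generic task
    # unreliability and EPC values below are illustrative only.
    def heart_hep(generic_task_unreliability, epcs):
        """HEART: HEP = nominal unreliability x prod[(EPC_i - 1) * APOA_i + 1].

        `epcs` is a list of (max_effect, assessed_proportion_of_affect) pairs,
        with the assessed proportion in [0, 1].
        """
        hep = generic_task_unreliability
        for max_effect, proportion in epcs:
            hep *= (max_effect - 1.0) * proportion + 1.0
        return min(1.0, hep)

    # Example: a fairly routine task (assumed nominal unreliability 0.003) with
    # two EPCs: time shortage (max x11, 40% assessed) and distraction (max x3, 50%).
    print(f"HEP = {heart_hep(0.003, [(11, 0.4), (3, 0.5)]):.4f}")
    ```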

  19. Accounting for measurement error: a critical but often overlooked process.

    PubMed

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
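
    The two-session technical error of measurement has a standard closed form, TEM = sqrt(sum of d_i^2 / 2N) for N specimens each measured twice, with the relative TEM expressed as a percentage of the grand mean. A minimal sketch with invented duplicate measurements:

    ```python
    # Minimal sketch of the two-session technical error of measurement (TEM).
    # The duplicate measurements below are invented for illustration.
    import math

    session1 = [52.1, 48.7, 55.0, 50.3, 49.8]   # e.g. a craniofacial dimension, mm
    session2 = [52.4, 48.5, 54.6, 50.7, 49.9]

    diffs = [a - b for a, b in zip(session1, session2)]
    n = len(diffs)
    tem = math.sqrt(sum(d * d for d in diffs) / (2 * n))
    grand_mean = (sum(session1) + sum(session2)) / (2 * n)
    print(f"TEM = {tem:.3f} mm, relative TEM = {100 * tem / grand_mean:.2f}%")
    ```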

  20. Human error identification for laparoscopic surgery: Development of a motion economy perspective.

    PubMed

    Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong

    2015-09-01

    This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  1. The Humanities in the Two-Year Agricultural Technical College.

    ERIC Educational Resources Information Center

    Nelson, Ronald J.

    Drawing from the experiences of the University of Minnesota Technical College, Waseca (UMW), this paper provides a rationale and suggestions for promoting and integrating the humanities in vocational education. After discussions of the problems of interesting occupational students in humanities courses and the value of humanities instruction in…

  2. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    DOE PAGES

    Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.

    2014-04-05

    In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
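
    The article's exact Bayesian formulation is not given in the abstract. The sketch below illustrates the general idea with the simplest conjugate choice: a Beta prior centred on an HRA-assigned HEP, updated with binomial simulator counts. The prior strength, HEP, and counts are assumptions, not values from the case study.

    ```python
    # Hedged sketch of the general idea (not the authors' exact method): treat an
    # existing HRA-assigned HEP as the mean of a Beta prior, then update it with
    # simulator observations via the conjugate Beta-Binomial rule.
    spar_h_hep = 0.02          # HEP assigned by the existing HRA method (assumed)
    prior_strength = 50        # pseudo-observations encoding confidence in the prior
    alpha0 = spar_h_hep * prior_strength
    beta0 = (1 - spar_h_hep) * prior_strength

    errors_observed = 1        # crews that failed the task in the simulator (assumed)
    opportunities = 30         # crews that attempted the task (assumed)

    alpha_post = alpha0 + errors_observed
    beta_post = beta0 + (opportunities - errors_observed)
    posterior_mean_hep = alpha_post / (alpha_post + beta_post)
    print(f"prior HEP = {spar_h_hep:.4f}, posterior mean HEP = {posterior_mean_hep:.4f}")
    ```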

  3. On the isobaric space of 25-hydroxyvitamin D in human serum: potential for interferences in liquid chromatography/tandem mass spectrometry, systematic errors and accuracy issues.

    PubMed

    Qi, Yulin; Geib, Timon; Schorr, Pascal; Meier, Florian; Volmer, Dietrich A

    2015-01-15

    Isobaric interferences in human serum can potentially influence the measured concentration levels of 25-hydroxyvitamin D [25(OH)D], when low resolving power liquid chromatography/tandem mass spectrometry (LC/MS/MS) instruments and non-specific MS/MS product ions are employed for analysis. In this study, we provide a detailed characterization of these interferences and a technical solution to reduce the associated systematic errors. Detailed electrospray ionization Fourier transform ion cyclotron resonance (FTICR) high-resolution mass spectrometry (HRMS) experiments were used to characterize co-extracted isobaric components of 25(OH)D from human serum. Differential ion mobility spectrometry (DMS), as a gas-phase ion filter, was implemented on a triple quadrupole mass spectrometer for separation of the isobars. HRMS revealed the presence of multiple isobaric compounds in extracts of human serum for different sample preparation methods. Several of these isobars had the potential to increase the peak areas measured for 25(OH)D on low-resolution MS instruments. A major isobaric component was identified as pentaerythritol oleate, a technical lubricant, which was probably an artifact from the analytical instrumentation. DMS was able to remove several of these isobars prior to MS/MS, when implemented on the low-resolution triple quadrupole mass spectrometer. It was shown in this proof-of-concept study that DMS-MS has the potential to significantly decrease systematic errors, and thus improve accuracy of vitamin D measurements using LC/MS/MS. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    PubMed

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administration in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  5. Human Error as an Emergent Property of Action Selection and Task Place-Holding.

    PubMed

    Tamborello, Franklin P; Trafton, J Gregory

    2017-05-01

    A computational process model could explain how the dynamic interaction of human cognitive mechanisms produces each of multiple error types. With increasing capability and complexity of technological systems, the potential severity of consequences of human error is magnified. Interruption greatly increases people's error rates, as does the presence of other information to maintain in an active state. The model executed as a software-instantiated Monte Carlo simulation. It drew on theoretical constructs such as associative spreading activation for prospective memory, explicit rehearsal strategies as a deliberate cognitive operation to aid retrospective memory, and decay. The model replicated the 30% effect of interruptions on postcompletion error in Ratwani and Trafton's Stock Trader task, the 45% interaction effect on postcompletion error of working memory capacity and working memory load from Byrne and Bovair's Phaser Task, as well as the 5% perseveration and 3% omission effects of interruption from the UNRAVEL Task. Error classes including perseveration, omission, and postcompletion error fall naturally out of the theory. The model explains post-interruption error in terms of task state representation and priming for recall of subsequent steps. Its performance suggests that task environments providing more cues to current task state will mitigate error caused by interruption. For example, interfaces could provide labeled progress indicators or facilities for operators to quickly write notes about their task states when interrupted.

  6. Annotated Bibliography of the Air Force Human Resources Laboratory Technical Reports - 1979.

    DTIC Science & Technology

    1981-05-01

    This document provides the academic and industrial R&D community with an annotated bibliography of Air Force Human Resources Laboratory (AFHRL) technical reports issued in 1979, covering all AFHRL projects. Compiled by Esther M. Barlow, Technical Services Division, Brooks Air Force Base, Texas; available from NTIS.

  7. Safety coaches in radiology: decreasing human error and minimizing patient harm.

    PubMed

    Dickerson, Julie M; Koch, Bernadette L; Adams, Janet M; Goodfriend, Martha A; Donnelly, Lane F

    2010-09-01

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program.

  8. Relationship between intraoperative non-technical performance and technical events in bariatric surgery.

    PubMed

    Fecso, A B; Kuzulugil, S S; Babaoglu, C; Bener, A B; Grantcharov, T P

    2018-03-30

    The operating theatre is a unique environment with complex team interactions, where technical and non-technical performance affect patient outcomes. The correlation between technical and non-technical performance, however, remains underinvestigated. The purpose of this study was to explore these interactions in the operating theatre. A prospective single-centre observational study was conducted at a tertiary academic medical centre. One surgeon and three fellows participated as main operators. All patients who underwent a laparoscopic Roux-en-Y gastric bypass and had the procedures captured using the Operating Room Black Box® platform were included. Technical assessment was performed using the Objective Structured Assessment of Technical Skills and Generic Error Rating Tool instruments. For non-technical assessment, the Non-Technical Skills for Surgeons (NOTSS) and Scrub Practitioners' List of Intraoperative Non-Technical Skills (SPLINTS) tools were used. Spearman rank-order correlation and N-gram statistics were conducted. Fifty-six patients were included in the study and 90 procedural steps (gastrojejunostomy and jejunojejunostomy) were analysed. There was a moderate to strong correlation between technical adverse events (rs = 0·417-0·687), rectifications (rs = 0·380-0·768) and non-technical performance of the surgical and nursing teams (NOTSS and SPLINTS). N-gram statistics showed that after technical errors, events and prior rectifications, the staff surgeon and the scrub nurse exhibited the most positive non-technical behaviours, irrespective of operator (staff surgeon or fellow). This study demonstrated that technical and non-technical performances are related, on both an individual and a team level. Valuable data can be obtained around intraoperative errors, events and rectifications. © 2018 BJS Society Ltd Published by John Wiley & Sons Ltd.
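
    As a hedged illustration of the correlation analysis mentioned above, the sketch below computes a Spearman rank-order correlation on invented pairs of per-step technical event counts and non-technical performance ratings; the values are not the study's data.

    ```python
    # Illustrative sketch only: Spearman rank-order correlation between per-step
    # technical event counts and a non-technical performance rating.
    from scipy import stats

    technical_events = [0, 1, 1, 2, 3, 3, 4, 5, 6, 8]              # events per step (invented)
    notss_rating     = [2.1, 2.4, 2.2, 2.7, 3.0, 2.9, 3.1, 3.4, 3.3, 3.8]  # invented ratings

    rho, p = stats.spearmanr(technical_events, notss_rating)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
    ```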

  9. Complications: acknowledging, managing, and coping with human error.

    PubMed

    Helo, Sevann; Moulton, Carol-Anne E

    2017-08-01

    Errors are inherent in medicine due to the imperfectness of human nature. Health care providers may have a difficult time accepting their fallibility, acknowledging mistakes, and disclosing errors. Fear of litigation, shame, blame, and concern about reputation are just some of the barriers preventing physicians from being more candid with their patients, despite the supporting body of evidence that patients cite poor communication and lack of transparency as primary drivers to file a lawsuit in the wake of a medical complication. Proper error disclosure includes a timely explanation of what happened, who was involved, why the error occurred, and how it will be prevented in the future. Medical mistakes afford the opportunity for individuals and institutions to be candid about their weaknesses while improving patient care processes. When a physician takes the Hippocratic Oath they take on a tremendous sense of responsibility for the care of their patients, and often bear the burden of their mistakes in isolation. Physicians may struggle with guilt, shame, and a crisis of confidence, which may thwart efforts to identify areas for improvement that can lead to meaningful change. Coping strategies for providers include discussing the event with others, seeking professional counseling, and implementing quality improvement projects. Physicians and health care organizations need to find adaptive ways to deal with complications that will benefit patients, providers, and their institutions.

  10. Complications: acknowledging, managing, and coping with human error

    PubMed Central

    Moulton, Carol-Anne E.

    2017-01-01

    Errors are inherent in medicine due to the imperfectness of human nature. Health care providers may have a difficult time accepting their fallibility, acknowledging mistakes, and disclosing errors. Fear of litigation, shame, blame, and concern about reputation are just some of the barriers preventing physicians from being more candid with their patients, despite the supporting body of evidence that patients cite poor communication and lack of transparency as primary drivers to file a lawsuit in the wake of a medical complication. Proper error disclosure includes a timely explanation of what happened, who was involved, why the error occurred, and how it will be prevented in the future. Medical mistakes afford the opportunity for individuals and institutions to be candid about their weaknesses while improving patient care processes. When a physician takes the Hippocratic Oath they take on a tremendous sense of responsibility for the care of their patients, and often bear the burden of their mistakes in isolation. Physicians may struggle with guilt, shame, and a crisis of confidence, which may thwart efforts to identify areas for improvement that can lead to meaningful change. Coping strategies for providers include discussing the event with others, seeking professional counseling, and implementing quality improvement projects. Physicians and health care organizations need to find adaptive ways to deal with complications that will benefit patients, providers, and their institutions. PMID:28904910

  11. Technical intelligence and culture: Nut cracking in humans and chimpanzees.

    PubMed

    Boesch, Christophe; Bombjaková, Daša; Boyette, Adam; Meier, Amelia

    2017-06-01

    According to the technical intelligence hypothesis, humans are superior to all other animal species in understanding and using tools. However, the vast majority of comparative studies between humans and chimpanzees, both proficient tool users, have not controlled for the effects of age, prior knowledge, past experience, rearing conditions, or differences in experimental procedures. We tested whether humans are superior to chimpanzees in selecting better tools, using them more dexterously, achieving higher performance, and gaining access to more resources, as predicted under the technical intelligence hypothesis. Aka and Mbendjele hunter-gatherers in the rainforest of Central African Republic and the Republic of Congo, respectively, and Taï chimpanzees in the rainforest of Côte d'Ivoire were observed cracking hard Panda oleosa nuts with different tools, as well as the soft Coula edulis and Elaeis guineensis nuts. The nut-cracking techniques, hammer material selection and two efficiency measures were compared. As predicted, the Aka and the Mbendjele were able to exploit more species of hard nuts in the forest than chimpanzees. However, the chimpanzees were sometimes more efficient than the humans. Social roles differed between the two species, with the Aka and especially the Mbendjele exhibiting cooperation between nut-crackers whereas the chimpanzees were mainly individualistic. Observations of nut-cracking by humans and chimpanzees only partially supported the technical intelligence hypothesis, as higher degrees of flexibility in tool selection seen in chimpanzees compensated for use of less efficient tool material than in humans. Nut cracking was a stronger social undertaking in humans than in chimpanzees. © 2017 Wiley Periodicals, Inc.

  12. Development of an FAA-EUROCONTROL technique for the analysis of human error in ATM : final report.

    DOT National Transportation Integrated Search

    2002-07-01

    Human error has been identified as a dominant risk factor in safety-oriented industries such as air traffic control (ATC). However, little is known about the factors leading to human errors in current air traffic management (ATM) systems. The first s...

  13. Reducing measurement errors during functional capacity tests in elders.

    PubMed

    da Silva, Mariane Eichendorf; Orssatto, Lucas Bet da Rosa; Bezerra, Ewertton de Souza; Silva, Diego Augusto Santos; Moura, Bruno Monteiro de; Diefenthaeler, Fernando; Freitas, Cíntia de la Rocha

    2018-06-01

    Accuracy is essential to the validity of functional capacity measurements. The aim was to evaluate the measurement error of functional capacity tests for elders and to suggest the use of the technical error of measurement and the credibility coefficient. Twenty elders (65.8 ± 4.5 years) completed six functional capacity tests that were simultaneously filmed and timed by four evaluators by means of a chronometer. A fifth evaluator timed the tests by analyzing the videos (reference data). The means of most evaluators for most tests were different from the reference (p < 0.05), except for two evaluators for two different tests. Technical errors of measurement differed between tests and evaluators. The Bland-Altman test showed differences in the concordance of results between methods. Short-duration tests showed higher technical errors of measurement than longer tests. In summary, tests timed by a chronometer underestimate the true functional capacity results. Differences between evaluators' reaction times and their perception of the start and end of the tests would explain the measurement errors. Calculation of the technical error of measurement or the use of video can increase data validity.
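
    The technical error of measurement referred to above is commonly computed from paired measurements with the Dahlberg formula, TEM = sqrt(sum(d^2)/(2n)), and reported relative to the grand mean as a percentage. The short Python sketch below illustrates that calculation; the timing values are hypothetical, and the assumption that this exact formula was used in the study is the reader's, not the paper's.

        import math

        def technical_error_of_measurement(trial1, trial2):
            # Dahlberg TEM for paired timings of the same tests by two raters
            n = len(trial1)
            sum_sq_diff = sum((a - b) ** 2 for a, b in zip(trial1, trial2))
            tem = math.sqrt(sum_sq_diff / (2 * n))             # absolute TEM (seconds)
            grand_mean = (sum(trial1) + sum(trial2)) / (2 * n)
            return tem, 100 * tem / grand_mean                 # relative TEM (%)

        # Hypothetical times (s): chronometer evaluator vs. video reference
        chrono = [8.1, 9.4, 7.8, 10.2, 8.9]
        video = [8.4, 9.7, 8.1, 10.4, 9.3]
        print(technical_error_of_measurement(chrono, video))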

  14. Spatial durbin error model for human development index in Province of Central Java.

    NASA Astrophysics Data System (ADS)

    Septiawan, A. R.; Handajani, S. S.; Martini, T. S.

    2018-05-01

    The Human Development Index (HDI) is an indicator used to measure success in building the quality of human life, describing how people access development outcomes in terms of income, health, and education. HDI in Central Java has improved every year. In 2016, HDI in Central Java was 69.98 %, an increase of 0.49 % over the previous year. The objective of this study was to apply the spatial Durbin error model, using queen contiguity spatial weights, to model HDI in Central Java Province. The spatial Durbin error model is used because it accommodates both spatial dependence in the errors and spatial dependence in the independent variables. The factors used are life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity. Based on the results, we obtain a spatial Durbin error model for HDI in Central Java in which life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity are the influencing factors.
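
    For readers unfamiliar with the specification named above, the spatial Durbin error model is usually written as follows (a LaTeX sketch of the standard textbook form, not an equation reproduced from the paper), where y stacks the regional HDI values, X holds the four factors listed, W is the queen contiguity spatial weight matrix, theta captures the spatially lagged covariates, and lambda is the spatial error coefficient:

        y = X\beta + WX\theta + u,
        u = \lambda W u + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n)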

  15. Human-system Interfaces to Automatic Systems: Review Guidance and Technical Basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Hara, J.M.; Higgins, J.C.

    Automation has become ubiquitous in modern complex systems, and commercial nuclear power plants are no exception. Beyond the control of plant functions and systems, automation is applied to a wide range of additional functions including monitoring and detection, situation assessment, response planning, response implementation, and interface management. Automation has become a 'team player' supporting plant personnel in nearly all aspects of plant operation. In light of the increasing use and importance of automation in new and future plants, guidance is needed to enable the NRC staff to conduct safety reviews of the human factors engineering (HFE) aspects of modern automation. The objective of the research described in this report was to develop guidance for reviewing the operator's interface with automation. We first developed a characterization of the important HFE aspects of automation based on how it is implemented in current systems. The characterization included five dimensions: level of automation, function of automation, modes of automation, flexibility of allocation, and reliability of automation. Next, we reviewed literature pertaining to the effects of these aspects of automation on human performance and the design of human-system interfaces (HSIs) for automation. Then, we used the technical basis established by the literature to develop design review guidance. The guidance is divided into the following seven topics: automation displays, interaction and control, automation modes, automation levels, adaptive automation, error tolerance and failure management, and HSI integration. In addition, we identified insights into the automation design process, operator training, and operations.

  16. Model-based influences on humans' choices and striatal prediction errors.

    PubMed

    Daw, Nathaniel D; Gershman, Samuel J; Seymour, Ben; Dayan, Peter; Dolan, Raymond J

    2011-03-24

    The mesostriatal dopamine system is prominently implicated in model-free reinforcement learning, with fMRI BOLD signals in ventral striatum notably covarying with model-free prediction errors. However, latent learning and devaluation studies show that behavior also shows hallmarks of model-based planning, and the interaction between model-based and model-free values, prediction errors, and preferences is underexplored. We designed a multistep decision task in which model-based and model-free influences on human choice behavior could be distinguished. By showing that choices reflected both influences we could then test the purity of the ventral striatal BOLD signal as a model-free report. Contrary to expectations, the signal reflected both model-free and model-based predictions in proportions matching those that best explained choice behavior. These results challenge the notion of a separate model-free learner and suggest a more integrated computational architecture for high-level human decision-making. Copyright © 2011 Elsevier Inc. All rights reserved.
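
    As a rough illustration of the model-free prediction error discussed above (not the authors' analysis code), the Python sketch below computes the temporal-difference error, delta = r + gamma * V(s') - V(s), that model-free accounts expect ventral striatal BOLD to track; the task states, learning rate, and reward values are hypothetical.

        alpha, gamma = 0.2, 1.0              # hypothetical learning rate and discount
        V = {"stage1": 0.0, "stage2": 0.0}   # state values

        def td_error(state, reward, next_value):
            # delta = r + gamma * V(next) - V(state): the model-free prediction error
            delta = reward + gamma * next_value - V[state]
            V[state] += alpha * delta        # value update
            return delta

        # One hypothetical two-step trial: unrewarded stage 1, then rewarded stage 2
        d1 = td_error("stage1", reward=0.0, next_value=V["stage2"])
        d2 = td_error("stage2", reward=1.0, next_value=0.0)   # terminal step
        print(d1, d2)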

  17. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    PubMed

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error that is attributable to humans rather than machines, and that is important to avoid in order to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. Design: cross-sectional. Setting: population-based. Participants: a sample of 5,480 subjects who were actively working as train drivers were recruited in South Korea; the 4,634 drivers who completed all questionnaires were analyzed (response rate 84.6%). Interventions: none. Measurements: the Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Of 4,634 train drivers, 349 (7.5%) showed more than one human error per 5 years. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who experienced severe posttraumatic stress showed a significantly higher number of human errors than those without. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, and work stress after adjusting for age, sex, education years, marital status, and career duration. Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.

  18. Good people who try their best can have problems: recognition of human factors and how to minimise error.

    PubMed

    Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David

    2016-01-01

    Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation, where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes arising from individual human factors and from surgical teams should be better recognised and emphasised. Attitudes towards, and acceptance of, preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, and highlight how they can lead to error, and how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error.

  20. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    PubMed

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
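
    In generic SEM notation (a LaTeX sketch of the standard latent-variable setup, not the author's exact model), the approach replaces a single error-prone proxy with a latent reproductive-effort construct xi measured by k indicators x_i (for example parity and lifetime reproductive success), and then regresses post-reproductive survival y on the latent construct rather than on any one indicator:

        x_i = \lambda_i \xi + \delta_i, \qquad i = 1, \dots, k \quad \text{(measurement model)}
        y = \beta \xi + \varepsilon \quad \text{(structural model)}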

  1. The current approach to human error and blame in the NHS.

    PubMed

    Ottewill, Melanie

    There is a large body of research to suggest that serious errors are widespread throughout medicine. The traditional response to these adverse events has been to adopt a 'person approach' - blaming the individual seen as 'responsible'. The culture of medicine is highly complicit in this response. Such an approach results in enormous personal costs to the individuals concerned and does little to address the root causes of errors and thus prevent their recurrence. Other industries, such as aviation, where safety is a paramount concern and which have similar structures to the medical profession, have, over the past decade or so, adopted a 'systems' approach to error, recognizing that human error is ubiquitous and inevitable and that systems need to be developed with this in mind. This approach has been highly successful, but has necessitated, first and foremost, a cultural shift. It is in the best interests of patients, and medical professionals alike, that such a shift is embraced in the NHS.

  2. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    PubMed

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward

  3. Impact of human error on lumber yield in rough mills

    Treesearch

    Urs Buehlmann; R. Edward Thomas

    2002-01-01

    Rough sawn, kiln-dried lumber contains characteristics such as knots and bark pockets that are considered by most people to be defects. When boards are used to produce furniture components, these defects are removed to produce clear, defect-free parts. Currently, human operators identify and locate the unusable board areas containing defects. Errors in determining a...

  4. Human-simulation-based learning to prevent medication error: A systematic review.

    PubMed

    Sarfati, Laura; Ranchon, Florence; Vantard, Nicolas; Schwiertz, Vérane; Larbre, Virginie; Parat, Stéphanie; Faudel, Amélie; Rioufol, Catherine

    2018-01-31

    In the past 2 decades, there has been an increasing interest in simulation-based learning programs to prevent medication error (ME). To improve knowledge, skills, and attitudes in prescribers, nurses, and pharmaceutical staff, these methods enable training without directly involving patients. However, best practices for simulation for healthcare providers are as yet undefined. By analysing the current state of experience in the field, the present review aims to assess whether human simulation in healthcare helps to reduce ME. A systematic review was conducted on Medline from 2000 to June 2015, associating the terms "Patient Simulation," "Medication Errors," and "Simulation Healthcare." Reports of technology-based simulation were excluded, to focus exclusively on human simulation in nontechnical skills learning. Twenty-one studies assessing simulation-based learning programs were selected, focusing on pharmacy, medicine or nursing students, or concerning programs aimed at reducing administration or preparation errors, managing crises, or learning communication skills for healthcare professionals. The studies varied in design, methodology, and assessment criteria. Few demonstrated that simulation was more effective than didactic learning in reducing ME. This review highlights a lack of long-term assessment and real-life extrapolation, with limited scenarios and participant samples. These various experiences, however, help in identifying the key elements required for an effective human simulation-based learning program for ME prevention: ie, scenario design, debriefing, and perception assessment. The performance of these programs depends on their ability to reflect reality and on professional guidance. Properly regulated simulation is a good way to train staff in events that happen only exceptionally, as well as in standard daily activities. By integrating human factors, simulation seems to be effective in preventing iatrogenic risk related to ME, if the program is

  5. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  6. Determination of facial symmetry in unilateral cleft lip and palate patients from three-dimensional data: technical report and assessment of measurement errors.

    PubMed

    Nkenke, Emeka; Lehner, Bernhard; Kramer, Manuel; Haeusler, Gerd; Benz, Stefanie; Schuster, Maria; Neukam, Friedrich W; Vairaktaris, Eleftherios G; Wurm, Jochen

    2006-03-01

    To assess measurement errors of a novel technique for the three-dimensional determination of the degree of facial symmetry in patients suffering from unilateral cleft lip and palate malformations. Technical report, reliability study. Cleft Lip and Palate Center of the University of Erlangen-Nuremberg, Erlangen, Germany. The three-dimensional facial surface data of five 10-year-old unilateral cleft lip and palate patients were subjected to the analysis. Distances, angles, surface areas, and volumes were assessed twice. Calculations were made for method error, intraclass correlation coefficient, and repeatability of the measurements of distances, angles, surface areas, and volumes. The method errors were less than 1 mm for distances and less than 1.5 degrees for angles. The intraclass correlation coefficients showed values greater than .90 for all parameters. The repeatability values were comparable for cleft and noncleft sides. The small method errors, high intraclass correlation coefficients, and comparable repeatability values for cleft and noncleft sides reveal that the new technique is appropriate for clinical use.

  7. Developing Human Resources for the Technical Workforce: A Comparative Study of Korea and Thailand

    ERIC Educational Resources Information Center

    Hawley, Joshua D.; Paek, Jeeyon

    2005-01-01

    Asian countries face significant and growing shortages of technically skilled workers. Vocational-technical systems are key components of national human resource development. Using labor market data from Thailand and Korea, this paper analyzes the economic payoff for individual investment in vocational-technical education, and subsequent…

  8. Hierarchical learning induces two simultaneous, but separable, prediction errors in human basal ganglia.

    PubMed

    Diuk, Carlos; Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew; Niv, Yael

    2013-03-27

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously.

  9. Hierarchical Learning Induces Two Simultaneous, But Separable, Prediction Errors in Human Basal Ganglia

    PubMed Central

    Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew

    2013-01-01

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously. PMID:23536092

  10. Human errors and occupational injuries of older female workers in the residential healthcare facilities for the elderly.

    PubMed

    Kim, Jun Sik; Jeong, Byung Yong

    2018-05-03

    The study aimed to describe the characteristics of occupational injuries among female workers in residential healthcare facilities for the elderly, and to analyze human errors as causes of accidents. From the national industrial accident compensation data, 506 female injuries were analyzed by age and occupation. The results showed that medical service workers were the most prevalent (54.1%), followed by social welfare workers (20.4%). Among the injured, 55.7% had less than 1 year of work experience, and 37.9% were 60 years old or older. Slips/falls were the most common type of accident (42.7%), and the proportion of injuries caused by slips/falls increases with age. Among human errors, action errors were the primary cause, followed by perception errors and cognition errors. In addition, the proportions of injuries caused by perception errors and by action errors each increase with age. The findings of this study suggest that there is a need to design workplaces that accommodate the characteristics of older female workers.

  11. Evaluating a medical error taxonomy.

    PubMed

    Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting.

  12. A Systematic Approach to Error Free Telemetry

    DTIC Science & Technology

    2017-06-28

    A Systematic Approach to Error-Free Telemetry, 412TW-TIM-17-03. Distribution A: approved for public release. This Technical Information Memorandum, covering dates from February 2016, was submitted by the Commander, 412th Test Wing, Edwards AFB, California 93524.

  13. Non-technical skills training to enhance patient safety.

    PubMed

    Gordon, Morris

    2013-06-01

    Patient safety is an increasingly recognised issue in health care. Systems-based and organisational methods of quality improvement, as well as education focusing on key clinical areas, are common, but there are few reports of educational interventions that focus on non-technical skills to address human factor sources of error. A flexible model for non-technical skills training for health care professionals has been designed based on the best available evidence, and with sound theoretical foundations. Educational sessions to improve non-technical skills in health care have been described before. The descriptions lack the details to allow educators to replicate and innovate further. A non-technical skills training course that can be delivered as either a half- or full-day intervention has been designed and delivered to a number of mixed groups of undergraduate medical students and doctors in postgraduate training. Participant satisfaction has been high and patient safety attitudes have improved post-intervention. This non-technical skills educational intervention has been built on a sound evidence base, and is described so as to facilitate replication and dissemination. With the key themes laid out, clinical educators will be able to build interventions focused on numerous clinical issues that pay attention to human factor contributors to safety. © 2013 John Wiley & Sons Ltd.

  14. Contextual Sensing: Integrating Contextual Information with Human and Technical Geo-Sensor Information for Smart Cities

    PubMed Central

    Sagl, Günther; Resch, Bernd; Blaschke, Thomas

    2015-01-01

    In this article we critically discuss the challenge of integrating contextual information, in particular spatiotemporal contextual information, with human and technical sensor information, which we approach from a geospatial perspective. We start by highlighting the significance of context in general and spatiotemporal context in particular and introduce a smart city model of interactions between humans, the environment, and technology, with context at the common interface. We then focus on both the intentional and the unintentional sensing capabilities of today’s technologies and discuss current technological trends that we consider have the ability to enrich human and technical geo-sensor information with contextual detail. The different types of sensors used to collect contextual information are analyzed and sorted into three groups on the basis of names considering frequently used related terms, and characteristic contextual parameters. These three groups, namely technical in situ sensors, technical remote sensors, and human sensors are analyzed and linked to three dimensions involved in sensing (data generation, geographic phenomena, and type of sensing). In contrast to other scientific publications, we found a large number of technologies and applications using in situ and mobile technical sensors within the context of smart cities, and surprisingly limited use of remote sensing approaches. In this article we further provide a critical discussion of possible impacts and influences of both technical and human sensing approaches on society, pointing out that a larger number of sensors, increased fusion of information, and the use of standardized data formats and interfaces will not necessarily result in any improvement in the quality of life of the citizens of a smart city. This article seeks to improve our understanding of technical and human geo-sensing capabilities, and to demonstrate that the use of such sensors can facilitate the integration of different

  15. Contextual Sensing: Integrating Contextual Information with Human and Technical Geo-Sensor Information for Smart Cities.

    PubMed

    Sagl, Günther; Resch, Bernd; Blaschke, Thomas

    2015-07-14

    In this article we critically discuss the challenge of integrating contextual information, in particular spatiotemporal contextual information, with human and technical sensor information, which we approach from a geospatial perspective. We start by highlighting the significance of context in general and spatiotemporal context in particular and introduce a smart city model of interactions between humans, the environment, and technology, with context at the common interface. We then focus on both the intentional and the unintentional sensing capabilities of today's technologies and discuss current technological trends that we consider have the ability to enrich human and technical geo-sensor information with contextual detail. The different types of sensors used to collect contextual information are analyzed and sorted into three groups on the basis of names considering frequently used related terms, and characteristic contextual parameters. These three groups, namely technical in situ sensors, technical remote sensors, and human sensors are analyzed and linked to three dimensions involved in sensing (data generation, geographic phenomena, and type of sensing). In contrast to other scientific publications, we found a large number of technologies and applications using in situ and mobile technical sensors within the context of smart cities, and surprisingly limited use of remote sensing approaches. In this article we further provide a critical discussion of possible impacts and influences of both technical and human sensing approaches on society, pointing out that a larger number of sensors, increased fusion of information, and the use of standardized data formats and interfaces will not necessarily result in any improvement in the quality of life of the citizens of a smart city. This article seeks to improve our understanding of technical and human geo-sensing capabilities, and to demonstrate that the use of such sensors can facilitate the integration of different

  16. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930

  17. The error in total error reduction.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
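
    To make the contrast between the two learning rules concrete, the Python sketch below implements one common instance of each under stated assumptions: a Rescorla-Wagner-style update in which all cues present share a single compound error term (total error reduction), versus an update in which each cue learns from its own cue-specific error (local error reduction). The learning rate and trial structure are hypothetical and not taken from the paper.

        def ter_update(weights, cues, outcome, lr=0.1):
            # Total error reduction: one shared error term for the whole compound
            prediction = sum(weights[c] for c in cues)
            delta = outcome - prediction
            for c in cues:
                weights[c] += lr * delta

        def ler_update(weights, cues, outcome, lr=0.1):
            # Local error reduction: each cue uses its own prediction error
            for c in cues:
                weights[c] += lr * (outcome - weights[c])

        w_ter = {"A": 0.0, "B": 0.0}
        w_ler = {"A": 0.0, "B": 0.0}
        for _ in range(50):                  # repeated AB+ compound trials
            ter_update(w_ter, ["A", "B"], outcome=1.0)
            ler_update(w_ler, ["A", "B"], outcome=1.0)
        print(w_ter)                         # weights share the outcome (~0.5 each)
        print(w_ler)                         # each weight approaches ~1.0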

  18. Human factors in surgery: from Three Mile Island to the operating room.

    PubMed

    D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco

    2009-01-01

    Human factors is a term that covers the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, and the art of ensuring their successful application to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Commonly, surgical error has been thought of as the consequence of a lack of skill or ability, and the result of thoughtless actions. Moreover, the operating theatre has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, and errors arising not from technical incompetence but from poor interpersonal skills. Surgeons will need to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.

  19. Detection of Error Related Neuronal Responses Recorded by Electrocorticography in Humans during Continuous Movements

    PubMed Central

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2013-01-01

    Background: Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user's movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) an adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings: Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300-400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of detection information for outcome errors and 74% of detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance: The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315

  20. Technical Note: Error metrics for estimating the accuracy of needle/instrument placement during transperineal magnetic resonance/ultrasound-guided prostate interventions.

    PubMed

    Bonmati, Ester; Hu, Yipeng; Villarini, Barbara; Rodell, Rachael; Martin, Paul; Han, Lianghao; Donaldson, Ian; Ahmed, Hashim U; Moore, Caroline M; Emberton, Mark; Barratt, Dean C

    2018-04-01

    Image-guided systems that fuse magnetic resonance imaging (MRI) with three-dimensional (3D) ultrasound (US) images for performing targeted prostate needle biopsy and minimally invasive treatments for prostate cancer are of increasing clinical interest. To date, a wide range of different accuracy estimation procedures and error metrics have been reported, which makes comparing the performance of different systems difficult. A set of nine measures are presented to assess the accuracy of MRI-US image registration, needle positioning, needle guidance, and overall system error, with the aim of providing a methodology for estimating the accuracy of instrument placement using a MR/US-guided transperineal approach. Using the SmartTarget fusion system, an MRI-US image alignment error was determined to be 2.0 ± 1.0 mm (mean ± SD), and an overall system instrument targeting error of 3.0 ± 1.2 mm. Three needle deployments for each target phantom lesion was found to result in a 100% lesion hit rate and a median predicted cancer core length of 5.2 mm. The application of a comprehensive, unbiased validation assessment for MR/US guided systems can provide useful information on system performance for quality assurance and system comparison. Furthermore, such an analysis can be helpful in identifying relationships between these errors, providing insight into the technical behavior of these systems. © 2018 American Association of Physicists in Medicine.
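
    As an illustration of the kind of point-based error measure summarized above (for example, an overall targeting error reported as the mean and standard deviation of distances), the Python sketch below computes Euclidean distances between planned target positions and achieved needle-tip positions; the coordinates are hypothetical and the paper's nine specific measures are not reproduced here.

        import math
        import statistics

        def euclidean(p, q):
            # 3D distance between a planned target and an achieved needle tip
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

        # Hypothetical positions in millimetres
        targets = [(10.0, 22.0, 5.0), (14.5, 18.0, 7.5), (9.0, 25.0, 4.0)]
        tips = [(11.2, 23.1, 5.5), (13.8, 19.5, 8.0), (10.5, 24.0, 3.2)]

        errors = [euclidean(t, n) for t, n in zip(targets, tips)]
        print(f"targeting error = {statistics.mean(errors):.1f} "
              f"+/- {statistics.stdev(errors):.1f} mm")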

  1. Human Resource Development: Technical Education's Challenge. Proceedings of the Annual National Clinic on Technical Education (12th, Spokane, Washington, March 26-28, 1975).

    ERIC Educational Resources Information Center

    Washington State Community Coll. District 17, Spokane.

    Speeches and discussions are transcribed in this report, which also includes a listing of the American Technical Education Association (ATEA) committee members, exhibitors, officers, and directory of speakers. Speeches covered "Human Resource Development" by Gene Rutledge; "The Impact of Technical Education on Economic…

  2. Improving Safety through Human Factors Engineering.

    PubMed

    Siewert, Bettina; Hochman, Mary G

    2015-10-01

    Human factors engineering (HFE) focuses on the design and analysis of interactive systems that involve people, technical equipment, and work environment. HFE is informed by knowledge of human characteristics. It complements existing patient safety efforts by specifically taking into consideration that, as humans, frontline staff will inevitably make mistakes. Therefore, the systems with which they interact should be designed for the anticipation and mitigation of human errors. The goal of HFE is to optimize the interaction of humans with their work environment and technical equipment to maximize safety and efficiency. Special safeguards include usability testing, standardization of processes, and use of checklists and forcing functions. However, the effectiveness of the safety program and resiliency of the organization depend on timely reporting of all safety events independent of patient harm, including perceived potential risks, bad outcomes that occur even when proper protocols have been followed, and episodes of "improvisation" when formal guidelines are found not to exist. Therefore, an institution must adopt a robust culture of safety, where the focus is shifted from blaming individuals for errors to preventing future errors, and where barriers to speaking up (including barriers introduced by steep authority gradients) are minimized. This requires creation of formal guidelines to address safety concerns, establishment of unified teams with open communication and shared responsibility for patient safety, and education of managers and senior physicians to perceive the reporting of safety concerns as a benefit rather than a threat. © RSNA, 2015.

  3. Proceedings of the Annual National Clinic on Technical Education (12th, Spokane, Washington, March 26-28, 1975). Human Resource Development: Technical Education's Challenge.

    ERIC Educational Resources Information Center

    Rutledge, Gene; And Others

    This report includes the presentations of the speakers appearing before the National Clinic on Technical Education. Topics cover human resource development; the impact of technical education on economic development (in Mississippi); economics of allied health education; manpower implications of environmental protection; manpower needs for…

  4. Human Factors and Ergonomics for the Dental Profession.

    PubMed

    Ross, Al

    2016-09-01

    This paper proposes that the science of Human Factors and Ergonomics (HFE) is suitable for wide application in dental education, training and practice to improve safety, quality and efficiency. Three areas of interest are highlighted. First, it is proposed that individual and team Non-Technical Skills (NTS), such as communication, leadership and stress management, can improve error rates and the efficiency of procedures. Secondly, in a physically and technically challenging environment, staff can benefit from ergonomic principles, which examine how design supports safe work. Finally, examination of organizational human factors can help anticipate stressors and plan for flexible responses to multiple, variable demands and fluctuating resources. Clinical relevance: HFE is an evidence-based approach to reducing error rates and procedural complications, and avoiding problems associated with stress and fatigue. Improved teamwork and organizational planning and efficiency can impact directly on patient outcomes.

  5. The Consolidated Human Activity Database — Master Version (CHAD-Master) Technical Memorandum

    EPA Pesticide Factsheets

    This technical memorandum contains information about the Consolidated Human Activity Database -- Master version, including CHAD contents, inventory of variables: Questionnaire files and Event files, CHAD codes, and references.

  6. 21 CFR 14.160 - Establishment of standing technical advisory committees for human prescription drugs.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... committees for human prescription drugs. 14.160 Section 14.160 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PUBLIC HEARING BEFORE A PUBLIC ADVISORY COMMITTEE Advisory Committees for Human Prescription Drugs § 14.160 Establishment of standing technical advisory committees for...

  7. 21 CFR 14.160 - Establishment of standing technical advisory committees for human prescription drugs.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... committees for human prescription drugs. 14.160 Section 14.160 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PUBLIC HEARING BEFORE A PUBLIC ADVISORY COMMITTEE Advisory Committees for Human Prescription Drugs § 14.160 Establishment of standing technical advisory committees for...

  8. 21 CFR 14.160 - Establishment of standing technical advisory committees for human prescription drugs.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... committees for human prescription drugs. 14.160 Section 14.160 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PUBLIC HEARING BEFORE A PUBLIC ADVISORY COMMITTEE Advisory Committees for Human Prescription Drugs § 14.160 Establishment of standing technical advisory committees for...

  9. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
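
    The procedure described, drawing expert scores from different probability mass functions and summarizing the resulting score distributions by their mean and standard deviation, can be sketched as follows; the 1-5 score scale, the pmfs, and the number of draws are illustrative assumptions rather than the values used in the study.

        import random
        import statistics

        SCORES = [1, 2, 3, 4, 5]                     # hypothetical severity scale
        PMFS = {                                     # assumed expert-behaviour pmfs
            "confident": [0.05, 0.05, 0.10, 0.30, 0.50],
            "reasonably doubting": [0.10, 0.15, 0.30, 0.30, 0.15],
            "irresolute": [0.20, 0.20, 0.20, 0.20, 0.20],
        }

        def simulate(pmf, n_draws=10000, seed=1):
            # Monte Carlo draws of one score under a given expert-behaviour pmf
            rng = random.Random(seed)
            draws = rng.choices(SCORES, weights=pmf, k=n_draws)
            return statistics.mean(draws), statistics.stdev(draws)

        for behaviour, pmf in PMFS.items():
            mean, sd = simulate(pmf)
            # sd serves as the variability (robustness) measure discussed above
            print(f"{behaviour}: mean={mean:.2f}, sd={sd:.2f}")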

  10. Using a site-specific technical error to establish training responsiveness: a preliminary explorative study.

    PubMed

    Weatherwax, Ryan M; Harris, Nigel K; Kilding, Andrew E; Dalleck, Lance C

    2018-01-01

    Even though cardiorespiratory fitness (CRF) training elicits numerous health benefits, not all individuals have positive training responses following a structured CRF intervention. It has been suggested that the technical error (TE), a combination of biological variability and measurement error, should be used to establish specific training responsiveness criteria to gain further insight on the effectiveness of the training program. To date, most training interventions use an absolute change or a TE from previous findings, which do not take into consideration the training site and equipment used to establish training outcomes or the specific cohort being evaluated. The purpose of this investigation was to retrospectively analyze training responsiveness of two CRF training interventions using two common criteria and a site-specific TE. Sixteen men and women completed two maximal graded exercise tests and verification bouts to identify maximal oxygen consumption (VO2max) and establish a site-specific TE. The TE was then used to retrospectively analyze training responsiveness in comparison to commonly used criteria: percent change of >0% and >+5.6% in VO2max. The TE was found to be 7.7% for relative VO2max. χ2 testing showed significant differences in all training criteria for each intervention and for pooled data from both interventions, except between %Δ >0% and %Δ >+7.7% in one of the investigations. Training nonresponsiveness ranged from 11.5% to 34.6%. Findings from the present study support the utility of a site-specific TE criterion to quantify training responsiveness. A similar methodology of establishing a site-specific, and even cohort-specific, TE should be considered to establish when true cardiorespiratory training adaptations occur.
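
    A minimal sketch of the logic described, deriving a site-specific technical error from duplicate baseline tests and then using it as the responsiveness threshold, is given below; the VO2max values are hypothetical, and the TE formula assumed here (the standard deviation of test-retest differences divided by the square root of 2, expressed relative to the mean) is one common convention rather than necessarily the authors' exact calculation.

        import math
        import statistics

        def percent_te(test1, test2):
            # Site-specific technical error (%) from duplicate baseline tests
            diffs = [a - b for a, b in zip(test1, test2)]
            te = statistics.stdev(diffs) / math.sqrt(2)        # absolute TE
            grand_mean = statistics.mean(test1 + test2)
            return 100 * te / grand_mean                       # relative TE (%)

        def classify(pre, post, threshold_pct):
            # A participant is a responder if %change in VO2max exceeds the TE
            pct_change = 100 * (post - pre) / pre
            return "responder" if pct_change > threshold_pct else "non-responder"

        # Hypothetical relative VO2max values (ml/kg/min), test vs. retest
        gxt1 = [38.2, 41.5, 35.0, 44.3]
        gxt2 = [37.5, 42.1, 35.8, 43.6]
        te_pct = percent_te(gxt1, gxt2)

        pre, post = 38.2, 41.0               # one participant, pre vs. post training
        print(f"TE = {te_pct:.1f}% -> {classify(pre, post, te_pct)}")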

  11. Technical Guidance for Constructing a Human Well-Being ...

    EPA Pesticide Factsheets

    The U.S. Environmental Protection Agency (EPA) Office of Research and Development's Sustainable and Healthy Communities Research Program (EPA 2015) developed the Human Well-being Index (HWBI) as an integrative measure of economic, social, and environmental contributions to well-being. The HWBI is composed of indicators and metrics representing eight domains of well-being: connection to nature, cultural fulfillment, education, health, leisure time, living standards, safety and security, and social cohesion. The domains and indicators in the HWBI were selected to provide a well-being framework that is broadly applicable to many different populations and communities, and can be customized using community-specific metrics. A primary purpose of this report is to adapt the U.S. HWBI to quantify human well-being for Puerto Rico. Additionally, our adaptation of the HWBI for Puerto Rico provides an example of how the HWBI can be adapted to different communities, along with technical guidance on processing data and calculating the index using R.
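
    A minimal sketch of the hierarchical aggregation such an index implies (metrics averaged into domain scores, domain scores averaged into the index); the metric values, the equal weighting, and the omission of the intermediate indicator layer are illustrative simplifications, not the EPA's actual HWBI calculation, which the report implements in R.

    ```python
    # Three of the eight HWBI domains, each with hypothetical metric values on a 0-1 scale.
    domains = {
        "health":           {"life_expectancy": 0.72, "self_reported_health": 0.65},
        "education":        {"attainment": 0.58, "participation": 0.61},
        "living standards": {"income": 0.55, "housing": 0.60},
    }

    def mean(values):
        values = list(values)
        return sum(values) / len(values)

    # Average metrics into domain scores, then domain scores into the composite index.
    domain_scores = {name: mean(metrics.values()) for name, metrics in domains.items()}
    index = mean(domain_scores.values())

    print(domain_scores)
    print(f"composite index: {index:.3f}")
    ```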

  12. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of these, 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  13. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of these, 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  14. The Public Understanding of Error in Educational Assessment

    ERIC Educational Resources Information Center

    Gardner, John

    2013-01-01

    Evidence from recent research suggests that in the UK the public perception of errors in national examinations is that they are simply mistakes; events that are preventable. This perception predominates over the more sophisticated technical view that errors arise from many sources and create an inevitable variability in assessment outcomes. The…

  15. The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG.

    PubMed

    Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio

    2018-06-01

    Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
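
    A minimal sketch of extracting a high-gamma-band amplitude envelope from a single channel via band-pass filtering and the Hilbert transform, the kind of signal feature that HGB mapping relies on; the 70-150 Hz band limits, the sampling rate, and the synthetic data are assumptions, not parameters from the study.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000                      # assumed sampling rate (Hz)
    t = np.arange(0, 2.0, 1 / fs)
    # Synthetic single-channel "EEG": noise plus a brief 100 Hz burst around
    # t = 1 s, standing in for an error-related high-gamma response.
    x = np.random.randn(t.size)
    x += 2.0 * np.sin(2 * np.pi * 100 * t) * np.exp(-((t - 1.0) ** 2) / (2 * 0.05 ** 2))

    # Band-pass 70-150 Hz (high-gamma band), then take the Hilbert envelope.
    b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype="band")
    hgb = filtfilt(b, a, x)
    envelope = np.abs(hilbert(hgb))

    peak_time = t[np.argmax(envelope)]
    print(f"high-gamma envelope peaks at t = {peak_time:.3f} s")
    ```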

  16. Does the A-not-B error in adult pet dogs indicate sensitivity to human communication?

    PubMed

    Kis, Anna; Topál, József; Gácsi, Márta; Range, Friederike; Huber, Ludwig; Miklósi, Adám; Virányi, Zsófia

    2012-07-01

    Recent dog-infant comparisons have indicated that the experimenter's communicative signals in object hide-and-search tasks increase the probability of perseverative (A-not-B) errors in both species (Topál et al. 2009). These behaviourally similar results, however, might reflect different mechanisms in dogs and in children. Similar errors may occur if the motor response of retrieving the object during the A trials cannot be inhibited in the B trials or if the experimenter's movements and signals toward the A hiding place in the B trials ('sham-baiting') distract the dogs' attention. In order to test these hypotheses, we tested dogs similarly to Topál et al. (2009) but eliminated the motor search in the A trials and 'sham-baiting' in the B trials. We found that neither an inability to inhibit previously rewarded motor response nor insufficiencies in their working memory and/or attention skills can explain dogs' erroneous choices. Further, we replicated the finding that dogs have a strong tendency to commit the A-not-B error after ostensive-communicative hiding and demonstrated the crucial effect of socio-communicative cues as the A-not-B error diminishes when location B is ostensively enhanced. These findings further support the hypothesis that the dogs' A-not-B error may reflect a special sensitivity to human communicative cues. Such object-hiding and search tasks provide a typical case for how susceptibility to human social signals could (mis)lead domestic dogs.

  17. Error Pattern Analysis Applied to Technical Writing: An Editor's Guide for Writers.

    ERIC Educational Resources Information Center

    Monagle, E. Brette

    The use of error pattern analysis can reduce the time and money spent on editing and correcting manuscripts. What is required is noting, classifying, and keeping a frequency count of errors. First an editor should take a typical page of writing and circle each error. After the editor has done a sufficiently large number of pages to identify an…
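
    A minimal sketch of the tallying step the abstract describes (classify each circled error, then keep a frequency count); the category names and the per-page log are illustrative.

    ```python
    from collections import Counter

    # Hypothetical per-page error log: (page, category) pairs an editor might
    # record while circling errors on a manuscript.
    error_log = [
        (1, "subject-verb agreement"), (1, "comma splice"), (2, "dangling modifier"),
        (2, "comma splice"), (3, "comma splice"), (3, "passive overuse"),
    ]

    # Frequency count of error categories, most common first.
    frequency = Counter(category for _, category in error_log)
    for category, count in frequency.most_common():
        print(f"{category}: {count}")
    ```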

  18. Application of the epidemiological model in studying human error in aviation

    NASA Technical Reports Server (NTRS)

    Cheaney, E. S.; Billings, C. E.

    1981-01-01

    An epidemiological model is described in conjunction with the analytical process through which aviation occurrence reports are decomposed into the events and factors pertinent to it. The model represents a process in which disease, emanating from environmental conditions, manifests itself in symptoms that may lead to fatal illness, recoverable illness, or no illness depending on individual circumstances of patient vulnerability, preventive actions, and intervention. In the aviation system the analogy of the disease process is the predilection for error of human participants. This arises from factors in the operating or physical environment and results in errors of commission or omission that, again depending on the individual circumstances, may lead to accidents, system perturbations, or harmless corrections. A discussion of previous investigations, each of which manifests the application of the epidemiological method, exemplifies its use and effectiveness.

  19. New paradigm for understanding in-flight decision making errors: a neurophysiological model leveraging human factors.

    PubMed

    Souvestre, P A; Landrock, C K; Blaber, A P

    2008-08-01

    Human-factors-centered aviation accident analyses report that skill-based errors are the cause of 80% of all accidents, decision-making-related errors 30%, and perceptual errors 6% [1]. In-flight decision-making error has long been recognized as a major avenue leading to incidents and accidents. Over the past three decades, tremendous and costly efforts have been made to clarify causation, roles and responsibility, as well as to elaborate various preventative and curative countermeasures blending state-of-the-art biomedical and technological advances with psychophysiological training strategies. In-flight statistics have not changed significantly, and a significant number of issues remain unresolved. The Fine Postural System and its corollary, Postural Deficiency Syndrome (PDS), both defined in the 1980s, are respectively neurophysiological and medical diagnostic models that reflect the status of central neural sensory-motor and cognitive regulatory controls. They have been used successfully in complex neurotraumatology and related rehabilitation for over two decades. Analysis of clinical data taken over a ten-year period from acute and chronic post-traumatic PDS patients shows a strong correlation between symptoms commonly exhibited before, alongside, or even after an error, and sensory-motor or PDS-related symptoms. Examples are given of how PDS-related central sensory-motor control dysfunction can be correctly identified and monitored via a neurophysiological ocular-vestibular-postural monitoring system. The data presented provide strong evidence that a specific biomedical assessment methodology can lead to a better understanding of the in-flight adaptive neurophysiological, cognitive and perceptual dysfunctional status that could induce in-flight errors. How relevant human factors can be identified and leveraged to maintain optimal performance will be addressed.

  20. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  1. Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation

    PubMed Central

    Russ, Alissa L; Zillich, Alan J; Melton, Brittany L; Russell, Scott A; Chen, Siying; Spina, Jeffrey R; Weiner, Michael; Johnson, Elizabette G; Daggy, Joanne K; McManus, M Sue; Hawsey, Jason M; Puleo, Anthony G; Doebbeling, Bradley N; Saleem, Jason J

    2014-01-01

    Objective To apply human factors engineering principles to improve alert interface design. We hypothesized that incorporating human factors principles into alerts would improve usability, reduce workload for prescribers, and reduce prescribing errors. Materials and methods We performed a scenario-based simulation study using a counterbalanced, crossover design with 20 Veterans Affairs prescribers to compare original versus redesigned alerts. We redesigned drug–allergy, drug–drug interaction, and drug–disease alerts based upon human factors principles. We assessed usability (learnability of redesign, efficiency, satisfaction, and usability errors), perceived workload, and prescribing errors. Results Although prescribers received no training on the design changes, prescribers were able to resolve redesigned alerts more efficiently (median (IQR): 56 (47) s) compared to the original alerts (85 (71) s; p=0.015). In addition, prescribers rated redesigned alerts significantly higher than original alerts across several dimensions of satisfaction. Redesigned alerts led to a modest but significant reduction in workload (p=0.042) and significantly reduced the number of prescribing errors per prescriber (median (range): 2 (1–5) compared to original alerts: 4 (1–7); p=0.024). Discussion Aspects of the redesigned alerts that likely contributed to better prescribing include design modifications that reduced usability-related errors, providing clinical data closer to the point of decision, and displaying alert text in a tabular format. Displaying alert text in a tabular format may help prescribers extract information quickly and thereby increase responsiveness to alerts. Conclusions This simulation study provides evidence that applying human factors design principles to medication alerts can improve usability and prescribing outcomes. PMID:24668841

  2. Impact of food and fluid intake on technical and biological measurement error in body composition assessment methods in athletes.

    PubMed

    Kerr, Ava; Slater, Gary J; Byrne, Nuala

    2017-02-01

    Two, three and four compartment (2C, 3C and 4C) models of body composition are popular methods to measure fat mass (FM) and fat-free mass (FFM) in athletes. However, the impact of food and fluid intake on measurement error has not been established. The purpose of this study was to evaluate standardised (overnight fasted, rested and hydrated) v. non-standardised (afternoon and non-fasted) presentation on technical and biological error on surface anthropometry (SA), 2C, 3C and 4C models. In thirty-two athletic males, measures of SA, dual-energy X-ray absorptiometry (DXA), bioelectrical impedance spectroscopy (BIS) and air displacement plethysmography (BOD POD) were taken to establish 2C, 3C and 4C models. Tests were conducted after an overnight fast (duplicate), about 7 h later after ad libitum food and fluid intake, and repeated 24 h later before and after ingestion of a specified meal. Magnitudes of changes in the mean and typical errors of measurement were determined. Mean change scores for non-standardised presentation and post meal tests for FM were substantially large in BIS, SA, 3C and 4C models. For FFM, mean change scores for non-standardised conditions produced large changes for BIS, 3C and 4C models, small for DXA, trivial for BOD POD and SA. Models that included a total body water (TBW) value from BIS (3C and 4C) were more sensitive to TBW changes in non-standardised conditions than 2C models. Biological error is minimised in all models with standardised presentation but DXA and BOD POD are acceptable if acute food and fluid intake remains below 500 g.

  3. Investigating mode errors on automated flight decks: illustrating the problem-driven, cumulative, and interdisciplinary nature of human factors research.

    PubMed

    Sarter, Nadine

    2008-06-01

    The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.

  4. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    PubMed

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

    Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports which may represent only a small proportion of the medication errors that actually take place in a hospital. Hence, interpretation of results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes of technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  6. A Review of Research on Error Detection. Technical Report No. 540.

    ERIC Educational Resources Information Center

    Meyer, Linda A.

    A review was conducted of the research on error detection studies completed with children, adolescents, and young adults to determine at what age children begin to detect errors in texts. The studies were grouped according to the subjects' ages. The focus of the review was on the following aspects of each study: the hypothesis that guided the…

  7. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.

    PubMed

    Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C

    2017-02-15

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support for a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that
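
    A minimal sketch of the adaptive-coding idea in a simple delta-rule learner whose prediction error is scaled by the running standard deviation of recent rewards, so that learning is dampened when rewards are more variable; the task structure and parameters are illustrative assumptions, not the model fitted in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def adaptive_learner(rewards, alpha=0.1):
        """Delta-rule learner whose prediction error is scaled by reward SD."""
        estimate, history = 0.0, []
        for r in rewards:
            history.append(r)
            sd = np.std(history) if len(history) > 1 else 1.0
            rpe = r - estimate                  # raw reward prediction error
            adaptive_rpe = rpe / max(sd, 1e-6)  # error coded relative to reward variability
            estimate += alpha * adaptive_rpe
            yield estimate

    # Rewards drawn from distributions with small vs. large SD: the adaptive
    # learner updates more cautiously when outcomes are more variable.
    for sd in (2.0, 10.0):
        rewards = rng.normal(50.0, sd, size=200)
        final = list(adaptive_learner(rewards))[-1]
        print(f"reward SD {sd:>4}: final estimate {final:.1f}")
    ```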

  8. Engineering Technical Review Planning Briefing

    NASA Technical Reports Server (NTRS)

    Gardner, Terrie

    2012-01-01

    The general topics covered in the engineering technical planning briefing are 1) overviews of NASA, Marshall Space Flight Center (MSFC), and Engineering, 2) the NASA Systems Engineering (SE) Engine and its implementation, 3) the NASA Project Life Cycle, 4) MSFC Technical Management Branch Services in relation to the SE Engine and the Project Life Cycle, 5) Technical Reviews, 6) NASA Human Factors Design Guidance, and 7) the MSFC Human Factors Team. The engineering technical review portion of the presentation is the primary focus of the overall presentation and will address the definition of a design review, execution guidance, the essential stages of a technical review, and the overall review planning life cycle. Examples of technical review plan content, review approaches, review schedules, and the review process will be provided and discussed. The human factors portion of the presentation will focus on the NASA guidance for human factors. Human factors definition, categories, design guidance, and human factors specialist roles will be addressed. In addition, the NASA Systems Engineering Engine description, definition, and application will be reviewed as background leading into the NASA Project Life Cycle Overview and technical review planning discussion.

  9. Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation.

    PubMed

    Russ, Alissa L; Zillich, Alan J; Melton, Brittany L; Russell, Scott A; Chen, Siying; Spina, Jeffrey R; Weiner, Michael; Johnson, Elizabette G; Daggy, Joanne K; McManus, M Sue; Hawsey, Jason M; Puleo, Anthony G; Doebbeling, Bradley N; Saleem, Jason J

    2014-10-01

    To apply human factors engineering principles to improve alert interface design. We hypothesized that incorporating human factors principles into alerts would improve usability, reduce workload for prescribers, and reduce prescribing errors. We performed a scenario-based simulation study using a counterbalanced, crossover design with 20 Veterans Affairs prescribers to compare original versus redesigned alerts. We redesigned drug-allergy, drug-drug interaction, and drug-disease alerts based upon human factors principles. We assessed usability (learnability of redesign, efficiency, satisfaction, and usability errors), perceived workload, and prescribing errors. Although prescribers received no training on the design changes, prescribers were able to resolve redesigned alerts more efficiently (median (IQR): 56 (47) s) compared to the original alerts (85 (71) s; p=0.015). In addition, prescribers rated redesigned alerts significantly higher than original alerts across several dimensions of satisfaction. Redesigned alerts led to a modest but significant reduction in workload (p=0.042) and significantly reduced the number of prescribing errors per prescriber (median (range): 2 (1-5) compared to original alerts: 4 (1-7); p=0.024). Aspects of the redesigned alerts that likely contributed to better prescribing include design modifications that reduced usability-related errors, providing clinical data closer to the point of decision, and displaying alert text in a tabular format. Displaying alert text in a tabular format may help prescribers extract information quickly and thereby increase responsiveness to alerts. This simulation study provides evidence that applying human factors design principles to medication alerts can improve usability and prescribing outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  10. Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward

    PubMed Central

    Kishida, Kenneth T.; Saez, Ignacio; Lohrenz, Terry; Witcher, Mark R.; Laxton, Adrian W.; Tatter, Stephen B.; White, Jason P.; Ellis, Thomas L.; Phillips, Paul E. M.; Montague, P. Read

    2016-01-01

    In the mammalian brain, dopamine is a critical neuromodulator whose actions underlie learning, decision-making, and behavioral control. Degeneration of dopamine neurons causes Parkinson’s disease, whereas dysregulation of dopamine signaling is believed to contribute to psychiatric conditions such as schizophrenia, addiction, and depression. Experiments in animal models suggest the hypothesis that dopamine release in human striatum encodes reward prediction errors (RPEs) (the difference between actual and expected outcomes) during ongoing decision-making. Blood oxygen level-dependent (BOLD) imaging experiments in humans support the idea that RPEs are tracked in the striatum; however, BOLD measurements cannot be used to infer the action of any one specific neurotransmitter. We monitored dopamine levels with subsecond temporal resolution in humans (n = 17) with Parkinson’s disease while they executed a sequential decision-making task. Participants placed bets and experienced monetary gains or losses. Dopamine fluctuations in the striatum fail to encode RPEs, as anticipated by a large body of work in model organisms. Instead, subsecond dopamine fluctuations encode an integration of RPEs with counterfactual prediction errors, the latter defined by how much better or worse the experienced outcome could have been. How dopamine fluctuations combine the actual and counterfactual is unknown. One possibility is that this process is the normal behavior of reward processing dopamine neurons, which previously had not been tested by experiments in animal models. Alternatively, this superposition of error terms may result from an additional yet-to-be-identified subclass of dopamine neurons. PMID:26598677
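
    A minimal sketch of the two error terms on a single bet: a reward prediction error (actual minus expected outcome) and a counterfactual prediction error (actual outcome minus the best outcome that could have been obtained); treating the superposition as a plain sum is an illustrative assumption, not the fitted model from the paper, and the numbers are hypothetical.

    ```python
    # One bet in a simple investment-style trial.
    expected_return = 0.02   # expected fractional gain for this trial
    bet_fraction    = 0.40   # fraction of capital actually wagered
    market_return   = 0.05   # realised fractional market movement

    actual_outcome   = bet_fraction * market_return   # what was actually won
    best_alternative = 1.0 * market_return            # counterfactual: having bet everything
    expected_outcome = bet_fraction * expected_return

    rpe            = actual_outcome - expected_outcome   # actual vs expected
    counterfactual = actual_outcome - best_alternative   # actual vs what could have been
    combined       = rpe + counterfactual                # illustrative superposition

    print(f"RPE = {rpe:+.3f}, counterfactual PE = {counterfactual:+.3f}, combined = {combined:+.3f}")
    ```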

  11. Subsecond dopamine fluctuations in human striatum encode superposed error signals about actual and counterfactual reward.

    PubMed

    Kishida, Kenneth T; Saez, Ignacio; Lohrenz, Terry; Witcher, Mark R; Laxton, Adrian W; Tatter, Stephen B; White, Jason P; Ellis, Thomas L; Phillips, Paul E M; Montague, P Read

    2016-01-05

    In the mammalian brain, dopamine is a critical neuromodulator whose actions underlie learning, decision-making, and behavioral control. Degeneration of dopamine neurons causes Parkinson's disease, whereas dysregulation of dopamine signaling is believed to contribute to psychiatric conditions such as schizophrenia, addiction, and depression. Experiments in animal models suggest the hypothesis that dopamine release in human striatum encodes reward prediction errors (RPEs) (the difference between actual and expected outcomes) during ongoing decision-making. Blood oxygen level-dependent (BOLD) imaging experiments in humans support the idea that RPEs are tracked in the striatum; however, BOLD measurements cannot be used to infer the action of any one specific neurotransmitter. We monitored dopamine levels with subsecond temporal resolution in humans (n = 17) with Parkinson's disease while they executed a sequential decision-making task. Participants placed bets and experienced monetary gains or losses. Dopamine fluctuations in the striatum fail to encode RPEs, as anticipated by a large body of work in model organisms. Instead, subsecond dopamine fluctuations encode an integration of RPEs with counterfactual prediction errors, the latter defined by how much better or worse the experienced outcome could have been. How dopamine fluctuations combine the actual and counterfactual is unknown. One possibility is that this process is the normal behavior of reward processing dopamine neurons, which previously had not been tested by experiments in animal models. Alternatively, this superposition of error terms may result from an additional yet-to-be-identified subclass of dopamine neurons.

  12. A system dynamic simulation model for managing the human error in power tools industries

    NASA Astrophysics Data System (ADS)

    Jamil, Jastini Mohd; Shaharanee, Izwan Nizal Mohd

    2017-10-01

    In today's modern and competitive environment, every organization faces situations in which work does not proceed as planned and has to be delayed when problems occur. Human error is often cited as the culprit. Errors made by employees force them to spend additional time identifying and checking for the error, which in turn can affect the normal operations of the company as well as the company's reputation. Employees are a key element of the organization in running all of its activities. Hence, the work performance of employees is a crucial factor in organizational success. The purpose of this study is to identify the factors that cause the increase in errors made by employees in the organization by using a system dynamics approach. The broadly defined targets in this study are employees in the Regional Material Field team from the purchasing department in the power tools industry. Questionnaires were distributed to the respondents to obtain their perceptions on the root causes of errors made by employees in the company. A system dynamics model was developed to simulate the factors behind the increasing errors made by employees and their impact. The findings of this study showed that the increase in errors made by employees was generally caused by workload, work capacity, job stress, motivation and employee performance. This problem could be mitigated by increasing the number of employees in the organization.
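
    A minimal system-dynamics-style sketch (Euler integration of stocks and flows) in which workload drives job stress and both drive the error rate, while hiring more staff lowers workload; the variables and coefficients are illustrative assumptions, not the model estimated in the study.

    ```python
    # Minimal stock-and-flow simulation; all coefficients are illustrative.
    dt, horizon = 0.1, 52.0          # weekly time step, one-year horizon
    staff, demand = 10.0, 120.0      # head count and weekly task demand

    stress, errors = 0.2, 0.0        # stocks: accumulated stress, cumulative errors
    t = 0.0
    while t < horizon:
        workload = demand / staff                          # tasks per employee per week
        stress += dt * (0.02 * workload - 0.1 * stress)    # stress builds with workload, decays over time
        error_rate = 0.5 * workload * (1.0 + stress)       # errors/week rise with workload and stress
        errors += dt * error_rate
        if abs(t - 26.0) < dt / 2:                         # mid-year intervention: hire two more employees
            staff += 2.0
        t += dt

    print(f"cumulative errors after one year: {errors:.0f}")
    ```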

  13. Error-free replicative bypass of (6–4) photoproducts by DNA polymerase ζ in mouse and human cells

    PubMed Central

    Yoon, Jung-Hoon; Prakash, Louise; Prakash, Satya

    2010-01-01

    The ultraviolet (UV)-induced (6–4) pyrimidine–pyrimidone photoproduct [(6–4) PP] confers a large structural distortion in DNA. Here we examine in human cells the roles of translesion synthesis (TLS) DNA polymerases (Pols) in promoting replication through a (6–4) TT photoproduct carried on a duplex plasmid where bidirectional replication initiates from an origin of replication. We show that TLS contributes to a large fraction of lesion bypass and that it is mostly error-free. We find that, whereas Pol η and Pol ι provide alternate pathways for mutagenic TLS, surprisingly, Pol ζ functions independently of these Pols and in a predominantly error-free manner. We verify and extend these observations in mouse cells and conclude that, in human cells, TLS during replication can be markedly error-free even opposite a highly distorting DNA lesion. PMID:20080950

  14. Crew systems: integrating human and technical subsystems for the exploration of space.

    PubMed

    Connors, M M; Harrison, A A; Summit, J

    1994-07-01

    Space exploration missions will require combining human and technical subsystems into overall "crew systems" capable of performing under the rigorous conditions of outer space. This report describes substantive and conceptual relationships among humans, intelligent machines, and communication systems, and explores how these components may be combined to complement and strengthen one another. We identify key research issues in the combination of humans and technology and examine the role of individual differences, group processes, and environmental conditions. We conclude that a crew system is, in effect, a social cyborg, a living system consisting of multiple individuals whose capabilities are extended by advanced technology.

  15. Crew systems: integrating human and technical subsystems for the exploration of space

    NASA Technical Reports Server (NTRS)

    Connors, M. M.; Harrison, A. A.; Summit, J.

    1994-01-01

    Space exploration missions will require combining human and technical subsystems into overall "crew systems" capable of performing under the rigorous conditions of outer space. This report describes substantive and conceptual relationships among humans, intelligent machines, and communication systems, and explores how these components may be combined to complement and strengthen one another. We identify key research issues in the combination of humans and technology and examine the role of individual differences, group processes, and environmental conditions. We conclude that a crew system is, in effect, a social cyborg, a living system consisting of multiple individuals whose capabilities are extended by advanced technology.

  16. Minimizing human error in radiopharmaceutical preparation and administration via a bar code-enhanced nuclear pharmacy management system.

    PubMed

    Hakala, John L; Hung, Joseph C; Mosman, Elton A

    2012-09-01

    The objective of this project was to ensure correct radiopharmaceutical administration through the use of a bar code system that links patient and drug profiles with on-site information management systems. This new combined system would minimize the amount of manual human manipulation, which has proven to be a primary source of error. The most common reason for dosing errors is improper patient identification when a dose is obtained from the nuclear pharmacy or when a dose is administered. A standardized electronic transfer of information from radiopharmaceutical preparation to injection will further reduce the risk of misadministration. Value stream maps showing the flow of the patient dose information, as well as potential points of human error, were developed. Next, a future-state map was created that included proposed corrections for the most common critical sites of error. Transitioning the current process to the future state will require solutions that address these sites. To optimize the future-state process, a bar code system that links the on-site radiology management system with the nuclear pharmacy management system was proposed. A bar-coded wristband connects the patient directly to the electronic information systems. The bar code-enhanced process linking the patient dose with the electronic information reduces the number of crucial points for human error and provides a framework to ensure that the prepared dose reaches the correct patient. Although the proposed flowchart is designed for a site with an in-house central nuclear pharmacy, much of the framework could be applied by nuclear medicine facilities using unit doses. An electronic connection between information management systems to allow the tracking of a radiopharmaceutical from preparation to administration can be a useful tool in preventing the mistakes that are an unfortunate reality for any facility.
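
    A minimal sketch of the verification step such a bar code system automates: the scanned wristband ID and the scanned dose label must resolve to the same patient and order before administration; the identifiers, record layout, and function name are hypothetical.

    ```python
    # Hypothetical dose orders keyed by dose bar code; in the proposed system
    # this lookup would live in the nuclear pharmacy management system.
    DOSE_ORDERS = {
        "DOSE-0012": {"patient_id": "PT-4471", "radiopharmaceutical": "Tc-99m MDP", "activity_mbq": 740},
    }

    def verify_administration(wristband_id: str, dose_barcode: str) -> bool:
        """Allow administration only if the scanned dose is assigned to the scanned patient."""
        order = DOSE_ORDERS.get(dose_barcode)
        if order is None:
            print("BLOCK: dose bar code not found in pharmacy system")
            return False
        if order["patient_id"] != wristband_id:
            print("BLOCK: dose is assigned to a different patient")
            return False
        print(f"OK: administer {order['radiopharmaceutical']} ({order['activity_mbq']} MBq)")
        return True

    verify_administration("PT-4471", "DOSE-0012")   # correct pairing
    verify_administration("PT-9999", "DOSE-0012")   # wrong patient -> blocked
    ```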

  17. Technical documentation challenges in aviation maintenance : a proceedings report.

    DOT National Transportation Integrated Search

    2012-11-01

    The 2012 Technical Documentation workshop addressed both problems and solutions associated with technical : documentation for maintenance. These issues are known to cause errors, rework, maintenance delays, other : safety hazards, and FAA administrat...

  18. Technical Note: Millimeter precision in ultrasound based patient positioning: Experimental quantification of inherent technical limitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun

    2014-08-15

    Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
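
    For reference, the four error components listed above combine in quadrature (root sum of squares) to roughly the reported 1.05 mm radial accuracy; a minimal check:

    ```python
    import math

    # Error components reported in the abstract (mm).
    components = {
        "phantom-to-laser alignment":        0.78,
        "treatment vs planning room lasers": 0.51,
        "3D US probe calibration/tracking":  0.49,
        "IR camera to phantom alignment":    0.03,
    }

    # Independent error sources combine in quadrature (root sum of squares).
    total = math.sqrt(sum(v ** 2 for v in components.values()))
    print(f"combined radial error ≈ {total:.2f} mm")   # ≈ 1.05 mm, matching the abstract
    ```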

  19. Human error and commercial aviation accidents: an analysis using the human factors analysis and classification system.

    PubMed

    Shappell, Scott; Detwiler, Cristy; Holcomb, Kali; Hackworth, Carla; Boquet, Albert; Wiegmann, Douglas A

    2007-04-01

    The aim of this study was to extend previous examinations of aviation accidents to include specific aircrew, environmental, supervisory, and organizational factors associated with two types of commercial aviation (air carrier and commuter/on-demand) accidents using the Human Factors Analysis and Classification System (HFACS). HFACS is a theoretically based tool for investigating and analyzing human error associated with accidents and incidents. Previous research has shown that HFACS can be reliably used to identify human factors trends associated with military and general aviation accidents. Using data obtained from both the National Transportation Safety Board and the Federal Aviation Administration, 6 pilot-raters classified aircrew, supervisory, organizational, and environmental causal factors associated with 1020 commercial aviation accidents that occurred over a 13-year period. The majority of accident causal factors were attributed to aircrew and the environment, with decidedly fewer associated with supervisory and organizational causes. Comparisons were made between HFACS causal categories and traditional situational variables such as visual conditions, injury severity, and regional differences. These data will provide support for the continuation, modification, and/or development of interventions aimed at commercial aviation safety. HFACS provides a tool for assessing human factors associated with accidents and incidents.

  20. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
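
    A minimal sketch of the reasonableness checks described above (route-length growth, course-change magnitude, and planned altitude versus a terrain database); the thresholds, waypoints, and elevations are hypothetical.

    ```python
    # Illustrative thresholds and terrain data; a real system would use
    # navigation databases and operator-specific limits.
    MAX_EXTRA_DISTANCE_NM = 200.0     # flag routes that grow by more than this
    MAX_COURSE_CHANGE_DEG = 90.0      # flag implausibly sharp course changes
    TERRAIN_FT = {"WPT_A": 2300, "WPT_B": 7900, "WPT_C": 500}
    MIN_CLEARANCE_FT = 1000

    def check_modification(old_length_nm, new_length_nm, course_change_deg,
                           planned_altitudes_ft):
        """Return a list of warnings for a proposed flight-plan change."""
        warnings = []
        if new_length_nm - old_length_nm > MAX_EXTRA_DISTANCE_NM:
            warnings.append("route length increased unusually much")
        if abs(course_change_deg) > MAX_COURSE_CHANGE_DEG:
            warnings.append("course change is implausibly large")
        for waypoint, altitude in planned_altitudes_ft.items():
            if altitude < TERRAIN_FT.get(waypoint, 0) + MIN_CLEARANCE_FT:
                warnings.append(f"planned altitude at {waypoint} conflicts with terrain")
        return warnings

    print(check_modification(850, 1120, 35, {"WPT_A": 9000, "WPT_B": 8000}))
    ```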

  1. Strength conditions for the elastic structures with a stress error

    NASA Astrophysics Data System (ADS)

    Matveev, A. D.

    2017-10-01

    As is known, constraints (strength conditions) are established for the safety factor of elastic structures and design details of a particular class, e.g. aviation structures; that is, the safety factor values of such structures should lie within a given range. It should be noted that the constraints are set for the safety factors corresponding to analytical (exact) solutions of the elasticity problems posed for the structures. Developing analytical solutions for most structures, especially irregularly shaped ones, is associated with great difficulties. Approximate approaches to solving elasticity problems, e.g. the technical theories of deformation of homogeneous and composite plates, beams and shells, are widely used for a great number of structures. Technical theories based on hypotheses give rise to approximate (technical) solutions with an irreducible error, whose exact value is difficult to determine. In static strength calculations with a specified small range for the safety factor, application of technical (strength-of-materials) solutions is difficult. However, there are numerical methods for developing approximate solutions of elasticity problems with arbitrarily small errors. In the present paper, adjusted reference (specified) strength conditions for the structural safety factor corresponding to an approximate solution of the elasticity problem are proposed. The stress error estimate is taken into account in the proposed strength conditions. It has been shown that, to fulfill the specified strength conditions for the safety factor of a given structure corresponding to an exact solution, adjusted strength conditions for the structural safety factor corresponding to an approximate solution are required. The stress error estimate, which is the basis for developing the adjusted strength conditions, has been determined for the specified strength conditions. The adjusted strength
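
    The abstract is truncated before the explicit conditions are stated; the following is one plausible reconstruction, offered as an illustrative assumption rather than the paper's actual formula, of how a bound on the relative error tightens the admissible range. Here n is the exact safety factor, n_h the safety factor computed from the approximate solution, and δ < 1 a bound on the relative stress error.

    ```latex
    \[
      \text{specified condition: } n_1 \le n \le n_2, \qquad
      \left| n - n_h \right| \le \delta\, n .
    \]
    \[
      \frac{n_h}{1+\delta} \;\le\; n \;\le\; \frac{n_h}{1-\delta}
      \quad\Longrightarrow\quad
      \text{adjusted condition: } n_1\,(1+\delta) \;\le\; n_h \;\le\; n_2\,(1-\delta).
    \]
    ```

    Under this reconstruction, whenever the approximate safety factor satisfies the adjusted (narrower) condition, the exact safety factor is guaranteed to satisfy the originally specified one.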

  2. Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image.

    PubMed

    Robinson, P J

    1997-11-01

    The performance of the human eye and brain has failed to keep pace with the enormous technical progress in the first full century of radiology. Errors and variations in interpretation now represent the weakest aspect of clinical imaging. Those interpretations which differ from the consensus view of a panel of "experts" may be regarded as errors; where experts fail to achieve consensus, differing reports are regarded as "observer variation". Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. Observer variation is substantial and should be taken into account when different diagnostic methods are compared; in many cases the difference between observers outweighs the difference between techniques. Strategies for reducing error include attention to viewing conditions, training of the observers, availability of previous films and relevant clinical data, dual or multiple reporting, standardization of terminology and report format, and assistance from computers. Digital acquisition and display will probably not affect observer variation but the performance of radiologists, as measured by receiver operating characteristic (ROC) analysis, may be improved by computer-directed search for specific image features. Other current developments show that where image features can be comprehensively described, computer analysis can replace the perception function of the observer, whilst the function of interpretation can in some cases be performed better by artificial neural networks. However, computer-assisted diagnosis is still in its infancy and complete replacement of the human observer is as yet a remote possibility.

  3. Results of a nuclear power plant application of A New Technique for Human Error Analysis (ATHEANA)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitehead, D.W.; Forester, J.A.; Bley, D.C.

    1998-03-01

    A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing-contexts (EFCs) that can make unsafe actions (UAs) more likely, and to identify ways to improve the method and its documentation. A set of criteria to evaluate the success of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators and one of them was a senior reactor operator on shift until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.

  4. On modeling human reliability in space flights - Redundancy and recovery operations

    NASA Astrophysics Data System (ADS)

    Aarset, M.; Wright, J. F.

    The reliability of humans is of paramount importance to the safety of space flight systems. This paper describes why 'back-up' operators might not be the best solution, and in some cases, might even degrade system reliability. The problem associated with human redundancy calls for special treatment in reliability analyses. The concept of Standby Redundancy is adopted, and psychological and mathematical models are introduced to improve the way such problems can be estimated and handled. In the past, human reliability has practically been neglected in most reliability analyses, and, when included, the humans have been modeled as a component and treated numerically the way technical components are. This approach is not wrong in itself, but it may lead to systematic errors if too simple analogies from the technical domain are used in the modeling of human behavior. In this paper redundancy in a man-machine system will be addressed. It will be shown how simplification from the technical domain, when applied to human components of a system, may give non-conservative estimates of system reliability.

  5. Defense of Defense Human Factors Engineering Technical Advisory Group Meeting Summary

    DTIC Science & Technology

    2012-07-01

    [Agenda fragment: Survivability session (Plaga) — Wright, N., OSD and DSOC helicopter seating studies; Zehner, G., an overview of USAF anthropometry; Plaga, J. & Hill, SAFE Association...predictions. 1230-1430, Standardization - 1472H (Poston); 1230-1430, Human Factors in Extreme Environments & SS (Plaga) — Ganey, HCN...; Classification (Personnel), LT Chris Foster, Dr. Hector Acosta; System Safety/Health Hazards/Survivability (SS/HH/Sv), Mr. John Plaga; Technical Society.]

  6. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

    Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.
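
    A minimal sketch of how such a taxonomy could be represented so that each error category carries its hypothesised cognitive mechanism and candidate interventions; the categories and interventions below are illustrative examples, not the paper's full taxonomy.

    ```python
    # Illustrative slice of a cognitive error taxonomy: each error type maps to a
    # cognitive mechanism and to candidate intervention strategies.
    TAXONOMY = {
        "slip": {
            "mechanism": "correct intention, faulty execution (attention failure)",
            "interventions": ["forcing functions", "confirmation prompts"],
        },
        "rule-based mistake": {
            "mechanism": "misapplication of a good rule or application of a bad rule",
            "interventions": ["decision support", "protocol redesign"],
        },
        "knowledge-based mistake": {
            "mechanism": "incomplete or incorrect mental model",
            "interventions": ["training", "better information display"],
        },
    }

    def suggest_interventions(error_type: str):
        """Return candidate interventions for a classified error, if known."""
        entry = TAXONOMY.get(error_type)
        return entry["interventions"] if entry else []

    print(suggest_interventions("slip"))
    ```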

  7. Free-Inertial and Damped-Inertial Navigation Mechanization and Error Equations

    DTIC Science & Technology

    1975-04-18

    AD-A014 356 — Free-Inertial and Damped-Inertial Navigation Mechanization and Error Equations. Warren G. Heller, The Analytic Sciences Corporation, technical report TR-312-1-1, April 18, 1975; reporting period 8/20/73 - 8/20/74. [Remainder of the scanned report-cover text is illegible.]

  8. To Err Is Human; To Structurally Prime from Errors Is Also Human

    ERIC Educational Resources Information Center

    Slevc, L. Robert; Ferreira, Victor S.

    2013-01-01

    Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…

  9. Review of U.S. Army Unmanned Aerial Systems Accident Reports: Analysis of Human Error Contributions

    DTIC Science & Technology

    2018-03-20

    USAARL Report No. 2018-08, Review of U.S. Army Unmanned Aerial Systems Accident Reports: Analysis of Human Error Contributions, by Kathryn A... Introduction: The success of unmanned aerial systems (UAS) operations relies upon a variety of factors, including, but not limited to

  10. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  11. Technical editing of research reports in biomedical journals.

    PubMed

    Wager, Elizabeth; Middleton, Philippa

    2008-10-08

    Most journals try to improve their articles by technical editing processes such as proof-reading, editing to conform to 'house styles', grammatical conventions and checking accuracy of cited references. Despite the considerable resources devoted to technical editing, we do not know whether it improves the accessibility of biomedical research findings or the utility of articles. This is an update of a Cochrane methodology review first published in 2003. To assess the effects of technical editing on research reports in peer-reviewed biomedical journals, and to assess the level of accuracy of references to these reports. We searched The Cochrane Library Issue 2, 2007; MEDLINE (last searched July 2006); EMBASE (last searched June 2007) and checked relevant articles for further references. We also searched the Internet and contacted researchers and experts in the field. Prospective or retrospective comparative studies of technical editing processes applied to original research articles in biomedical journals, as well as studies of reference accuracy. Two review authors independently assessed each study against the selection criteria and assessed the methodological quality of each study. One review author extracted the data, and the second review author repeated this. We located 32 studies addressing technical editing and 66 surveys of reference accuracy. Only three of the studies were randomised controlled trials. A 'package' of largely unspecified editorial processes applied between acceptance and publication was associated with improved readability in two studies and improved reporting quality in another two studies, while another study showed mixed results after stricter editorial policies were introduced. More intensive editorial processes were associated with fewer errors in abstracts and references. Providing instructions to authors was associated with improved reporting of ethics requirements in one study and fewer errors in references in two studies, but no

  12. Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.

    ERIC Educational Resources Information Center

    Anderson, Richard C.; And Others

    A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year to year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially rate of oral reading errors, and gains in reading…

  13. FMEA: a model for reducing medical errors.

    PubMed

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, first introduced in the aerospace industry in the 1960s. Early applications in the health care industry, dating back to the 1990s, included critical systems in the development and manufacture of drugs and the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO) licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review the data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
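
    The quantity driving an FMEA of this kind is the risk priority number, RPN = severity x occurrence x detectability, each scored on a 1-10 scale. The short Python sketch below ranks failure modes by RPN; the laboratory failure modes and scores are invented for illustration and are not taken from the review.

      # Minimal FMEA sketch: rank failure modes by RPN so mitigation targets the worst first.
      failure_modes = {
          # name: (severity, occurrence, detectability), all on assumed 1-10 scales
          "specimen mislabelled at phlebotomy": (9, 4, 6),
          "wrong anticoagulant tube used": (6, 3, 4),
          "POCT device not recalibrated": (7, 5, 5),
      }

      def rpn(severity, occurrence, detectability):
          return severity * occurrence * detectability

      for mode, scores in sorted(failure_modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True):
          print(f"{mode:38s} RPN = {rpn(*scores)}")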

  14. APLP2 Regulates Refractive Error and Myopia Development in Mice and Humans

    PubMed Central

    Verhoeven, Virginie J. M.; Hysi, Pirro G.; Wojciechowski, Robert; Singh, Pawan Kumar; Kumar, Ashok; Thinakaran, Gopal; Williams, Cathy

    2015-01-01

    Myopia is the most common vision disorder and the leading cause of visual impairment worldwide. However, gene variants identified to date explain less than 10% of the variance in refractive error, leaving the majority of heritability unexplained (“missing heritability”). Previously, we reported that expression of APLP2 was strongly associated with myopia in a primate model. Here, we found that low-frequency variants near the 5’-end of APLP2 were associated with refractive error in a prospective UK birth cohort (n = 3,819 children; top SNP rs188663068, p = 5.0 × 10(-4)) and a CREAM consortium panel (n = 45,756 adults; top SNP rs7127037, p = 6.6 × 10(-3)). These variants showed evidence of differential effect on childhood longitudinal refractive error trajectories depending on time spent reading (gene x time spent reading x age interaction, p = 4.0 × 10(-3)). Furthermore, Aplp2 knockout mice developed high degrees of hyperopia (+11.5 ± 2.2 D, p < 1.0 × 10(-4)) compared to both heterozygous (-0.8 ± 2.0 D, p < 1.0 × 10(-4)) and wild-type (+0.3 ± 2.2 D, p < 1.0 × 10(-4)) littermates and exhibited a dose-dependent reduction in susceptibility to environmentally induced myopia (F(2, 33) = 191.0, p < 1.0 × 10(-4)). This phenotype was associated with reduced contrast sensitivity (F(12, 120) = 3.6, p = 1.5 × 10(-4)) and changes in the electrophysiological properties of retinal amacrine cells, which expressed Aplp2. This work identifies APLP2 as one of the “missing” myopia genes, demonstrating the importance of a low-frequency gene variant in the development of human myopia. It also demonstrates an important role for APLP2 in refractive development in mice and humans, suggesting a high level of evolutionary conservation of the signaling pathways underlying refractive eye development. PMID:26313004

  15. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 5 2012-10-01 2012-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  16. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 5 2014-10-01 2014-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  17. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  18. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 5 2013-10-01 2013-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  19. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 5 2011-10-01 2011-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  20. 42 CFR 3.552 - Harmless error.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Harmless error. 3.552 Section 3.552 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS PATIENT SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT Enforcement Program § 3.552 Harmless error. No error in either the...

  1. Development of an errorable car-following driver model

    NASA Astrophysics Data System (ADS)

    Yang, H.-H.; Peng, H.

    2010-06-01

    An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (with error) behaviours. This model was developed for the evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-squares identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur in simulation. The simulated crash rate was found to be similar to, but somewhat higher than, that reported in traffic statistics.
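
    The abstract notes that driver time delay was identified with recursive least squares. The toy Python sketch below shows a generic RLS update of that kind applied to an assumed linear car-following law; the regressors, gains, and noise levels are illustrative and not those of the published model.

      # Generic recursive least-squares (RLS) identification, sketched for a toy
      # linear car-following law y = theta . phi (acceleration from relative
      # speed and spacing error). Assumed model, for illustration only.
      import numpy as np

      def rls_update(theta, P, phi, y, lam=0.99):
          """One RLS step with forgetting factor lam."""
          phi = phi.reshape(-1, 1)
          K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
          theta = theta + (K * (y - phi.T @ theta)).ravel()
          P = (P - K @ phi.T @ P) / lam                  # covariance update
          return theta, P

      rng = np.random.default_rng(0)
      true_theta = np.array([0.6, 0.1])                  # "true" driver gains (assumed)
      theta, P = np.zeros(2), np.eye(2) * 100.0
      for _ in range(500):
          phi = rng.normal(size=2)                       # [relative speed, spacing error]
          y = phi @ true_theta + rng.normal(scale=0.05)  # noisy observed acceleration
          theta, P = rls_update(theta, P, phi, y)
      print(theta)                                       # converges towards [0.6, 0.1]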

  2. FAO-OIE-WHO Joint Technical Consultation on Avian Influenza at the Human-Animal Interface.

    PubMed

    Anderson, Tara; Capua, Ilaria; Dauphin, Gwenaëlle; Donis, Ruben; Fouchier, Ron; Mumford, Elizabeth; Peiris, Malik; Swayne, David; Thiermann, Alex

    2010-05-01

    For the past 10 years, animal health experts and human health experts have been gaining experience in the technical aspects of avian influenza in mostly separate fora. More recently, in 2006, in a meeting of the small WHO Working Group on Influenza Research at the Human Animal Interface (Meeting report available from: http://www.who.int/csr/resources/publications/influenza/WHO_CDS_EPR_GIP_2006_3/en/index.html) in Geneva allowed influenza experts from the animal and public health sectors to discuss together the most recent avian influenza research. Ad hoc bilateral discussions on specific technical issues as well as formal meetings such as the Technical Meeting on HPAI and Human H5N1 Infection (Rome, June, 2007; information available from: http://www.fao.org/avianflu/en/conferences/june2007/index.html) have increasingly brought the sectors together and broadened the understanding of the topics of concern to each sector. The sectors have also recently come together at the broad global level, and have developed a joint strategy document for working together on zoonotic diseases (Joint strategy available from: ftp://ftp.fao.org/docrep/fao/011/ajl37e/ajl37e00.pdf). The 2008 FAO-OIE-WHO Joint Technical Consultation on Avian Influenza at the Human Animal Interface described here was the first opportunity for a large group of influenza experts from the animal and public health sectors to gather and discuss purely technical topics of joint interest that exist at the human-animal interface. During the consultation, three influenza-specific sessions aimed to (1) identify virological characteristics of avian influenza viruses (AIVs) important for zoonotic and pandemic disease, (2) evaluate the factors affecting evolution and emergence of a pandemic influenza strain and identify existing monitoring systems, and (3) identify modes of transmission and exposure sources for human zoonotic influenza infection (including discussion of specific exposure risks by affected countries). A

  3. Human System Simulation in Support of Human Performance Technical Basis at NPPs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David Gertman; Katya Le Blanc; Alan Mecham

    2010-06-01

    This paper focuses on strategies and progress toward establishing the Idaho National Laboratory’s (INL’s) Human Systems Simulator Laboratory at the Center for Advanced Energy Studies (CAES), a consortium of Idaho State Universities. The INL is one of the National Laboratories of the US Department of Energy. One of the first planned applications for the Human Systems Simulator Laboratory is implementation of a dynamic nuclear power plant (NPP) simulation, in which studies of operator workload, situation awareness, performance and preference will be carried out in simulated control rooms, including nuclear power plant control rooms. Simulation offers a means by which to review operational concepts, improve design practices and provide a technical basis for licensing decisions. In preparation for the next-generation power plant and current government and industry efforts in support of light water reactor sustainability, human operators will be attached to a suite of physiological measurement instruments and, in combination with traditional Human Factors measurement techniques, carry out control room tasks in simulated advanced digital and hybrid analog/digital control rooms. The current focus of the Human Systems Simulator Laboratory is building core competence in quantitative and qualitative measurements of situation awareness and workload. Of particular interest is whether the introduction of digital systems, including automated procedures, has the potential to reduce workload and enhance safety while improving situation awareness, or whether workload is merely shifted and situation awareness is modified in yet to be determined ways. Data analysis is carried out by engineers and scientists and includes measures of the physical and neurological correlates of human performance. The current approach supports a user-centered design philosophy (see ISO 13407, “Human Centered Design Process for Interactive Systems,” 1999) wherein the context for task performance along

  4. Evaluating the Performance Diagnostic Checklist-Human Services to Assess Incorrect Error-Correction Procedures by Preschool Paraprofessionals

    ERIC Educational Resources Information Center

    Bowe, Melissa; Sellers, Tyra P.

    2018-01-01

    The Performance Diagnostic Checklist-Human Services (PDC-HS) has been used to assess variables contributing to undesirable staff performance. In this study, three preschool teachers completed the PDC-HS to identify the factors contributing to four paraprofessionals' inaccurate implementation of error-correction procedures during discrete trial…

  5. Decreasing the occurrence of intraoperative technical errors through periodic simple show, tell and learn method.

    PubMed

    Steinberg, Ely L; Amar, Eyal; Albagli, Assaf; Rath, Ehud; Salai, Moshe

    2014-08-01

    Technical errors (TE) that occur during surgery for treating fractures are considered preventable by good preoperative planning and surgeon education. This prospective study evaluated a new instructional method for improving surgical outcomes that involved surgeons assessing their own recent performances. Postoperative radiographs from two groups of patients were assessed during consecutive 4-month periods. A total of 350 operations were included in the Early Group and 411 operations in the Late Group. All the TE that occurred during the first period were reviewed and discussed among the residents and the consultant surgeons who had performed those operations. The same procedure was followed 4 months later. The TE were classified as minor, moderate and major. The two groups included the same 41 surgeons. The most common TE were insufficient reduction, varus and valgus malalignment, and prominent hardware. The total number of errors dropped significantly, from 52 (14.7%) during the first period to 26 (6.3%) during the second period (p = 0.0003). The TE severity score dropped from 81 to 38 (p = 0.0001). The most affected regions were the humerus (p < 0.001), midshaft femur (p = 0.007), proximal femur (p = 0.004) and radius (p = 0.008). Most of the gains were made in the moderate category (p = 0.0001). The consultants performed statistically better than the residents in the first period (12% vs. 20%, p = 0.036), but similarly to the residents in the second period (5.3% vs. 9%, p = 0.164). A TE index, calculated by dividing the accumulated sum by the number of operations, dropped in the two groups from 0.2 and 0.3, respectively, to 0.09 in both. Intraoperative TE can be significantly reduced by periodic performance evaluations in a seminar setting during which groups of surgeons review the TE that they and their colleagues made during recent orthopaedic surgical procedures. Level II. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. A Unified Approach to Measurement Error and Missing Data: Details and Extensions

    ERIC Educational Resources Information Center

    Blackwell, Matthew; Honaker, James; King, Gary

    2017-01-01

    We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model…

  7. Exploration studies technical report, FY1988 status. Volume 1: Technical summary

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Office of Exploration (OEXP) at NASA Headquarters has been tasked with defining and recommending alternatives for an early 1990s national decision on a focused program of human exploration of the solar system. The Mission Analysis and System Engineering (MASE) group, which is managed by the Exploration Studies Office at the Lyndon B. Johnson Space Center, is responsible for coordinating the technical studies necessary for accomplishing such a task. This technical report, produced by the MASE, describes the process that has been developed in a case study approach. The four case studies developed in FY88 include: (1) Human Expedition to Phobos; (2) Human Expedition to Mars; (3) Lunar Observatory; and (4) Lunar Outpost to Early Mars Evolution. The final outcome of this effort is a set of programmatic and technical conclusions and recommendations for the following year's work.

  8. On Space Exploration and Human Error: A Paper on Reliability and Safety

    NASA Technical Reports Server (NTRS)

    Bell, David G.; Maluf, David A.; Gawdiak, Yuri

    2005-01-01

    NASA space exploration should largely address a problem class in reliability and risk management stemming primarily from human error, system risk and multi-objective trade-off analysis, by conducting research into system complexity, risk characterization and modeling, and system reasoning. In general, in every mission we can distinguish risk in three possible ways: a) known-known, b) known-unknown, and c) unknown-unknown. It is almost certain that space exploration will partially experience known or unknown risks similar to those embedded in the Apollo missions, Shuttle or Station, unless something alters how NASA perceives and manages safety and reliability.

  9. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission’s (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research whose objective is to model and quantify team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.
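
    To make the HEP estimation concrete, the sketch below follows the common SPAR-H-style pattern of scaling a nominal human error probability by multiplicative performance shaping factors; a team/communication factor is added in the same way, which is the kind of influence the paper argues is underspecified. The nominal value and multipliers are assumptions for illustration, not values from SPAR-H tables or from the paper.

      # Hedged sketch of a SPAR-H-style HEP adjustment (illustrative numbers only).
      NOMINAL_HEP = 0.001              # assumed nominal human error probability

      psf_multipliers = {              # assumed performance shaping factor multipliers
          "available time": 1.0,
          "stress": 2.0,
          "task complexity": 2.0,
          "crew communication": 5.0,   # a team-level factor of the kind the paper targets
      }

      hep = NOMINAL_HEP
      for factor, multiplier in psf_multipliers.items():
          hep *= multiplier
      hep = min(hep, 1.0)              # a probability cannot exceed 1
      print(f"adjusted HEP = {hep:.4f}")   # 0.0200 with these multipliers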

  10. The Effect of Age, Accommodation and Refractive Error on the Adult Human Eye

    PubMed Central

    Richdale, Kathryn; Bullimore, Mark A.; Sinnott, Loraine T.; Zadnik, Karla

    2015-01-01

    Purpose To quantify changes in ocular dimensions associated with age, refractive error, and accommodative response, in vivo, in 30- to 50-year-old human subjects. Methods The right eyes of 91 adults were examined using ultrasonography, phakometry, keratometry, pachymetry, interferometry, anterior segment optical coherence tomography, and high resolution magnetic resonance imaging. Accommodation was measured subjectively with a push-up test and objectively using open-field autorefraction. Regression analyses were used to assess differences in ocular parameters with age, refractive error and accommodation. Results With age, crystalline lens thickness increased (0.03 mm/yr), anterior lens curvature steepened (0.11 mm/yr), anterior chamber depth decreased (0.02 mm/yr) and lens equivalent refractive index decreased (0.001/yr) (all p < 0.01). With increasing myopia, there were significant increases in axial length (0.37 mm/D), vitreous chamber depth (0.34 mm/D), vitreous chamber height (0.09 mm/D) and ciliary muscle ring diameter (0.10 mm/D) (all p < 0.05). Increasing myopia was also associated with steepening of both the cornea (0.16 mm/D) and anterior lens surface (0.011 mm/D) (both p < 0.04). With accommodation, the ciliary muscle ring diameter decreased (0.08 mm/D), and the muscle thinned posteriorly (0.008 mm/D), allowing the lens to shorten equatorially (0.07 mm/D) and thicken axially (0.06 mm/D) (all p < 0.03). Conclusions Refractive error is significantly correlated with not only the axial dimensions, but the anterior equatorial dimension of the adult eye. Further testing and development of accommodating intraocular lenses should account for differences in patients’ preoperative refractive error. PMID:26703933

  11. Refractive error characteristics of early and advanced presbyopic individuals.

    DOT National Transportation Integrated Search

    1977-07-01

    The frequency and distribution of ocular refractive errors among middle-aged and older people were obtained from a nonclinical population holding a variety of blue-collar, clerical, and technical jobs. The 422 individuals ranged in age from 35 to 69 ...

  12. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  13. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  14. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  15. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  16. Eliminating US hospital medical errors.

    PubMed

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of death and injury for thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible in the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will lead to an increase in fixed costs, especially in the shorter time frame. This paper draws additional attention to what is needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospitals' profitability in the long run and also ensure patient safety.

  17. Analysis of host response to bacterial infection using error model based gene expression microarray experiments

    PubMed Central

    Stekel, Dov J.; Sarti, Donatella; Trevino, Victor; Zhang, Lihong; Salmon, Mike; Buckley, Chris D.; Stevens, Mark; Pallen, Mark J.; Penn, Charles; Falciani, Francesco

    2005-01-01

    A key step in the analysis of microarray data is the selection of genes that are differentially expressed. Ideally, such experiments should be properly replicated in order to infer both technical and biological variability, and the data should be subjected to rigorous hypothesis tests to identify the differentially expressed genes. However, in microarray experiments involving the analysis of very large numbers of biological samples, replication is not always practical. Therefore, there is a need for a method to select differentially expressed genes in a rational way from insufficiently replicated data. In this paper, we describe a simple method that uses bootstrapping to generate an error model from a replicated pilot study that can be used to identify differentially expressed genes in subsequent large-scale studies on the same platform, but in which there may be no replicated arrays. The method builds a stratified error model that includes array-to-array variability, feature-to-feature variability and the dependence of error on signal intensity. We apply this model to the characterization of the host response in a model of bacterial infection of human intestinal epithelial cells. We demonstrate the effectiveness of error model based microarray experiments and propose this as a general strategy for a microarray-based screening of large collections of biological samples. PMID:15800204
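
    A toy version of the bootstrap idea is sketched below: use replicated pilot arrays to learn how much a single-array log-ratio fluctuates in each intensity stratum, then flag unreplicated measurements whose log-ratios exceed that noise. The strata, gene names, and noise levels are invented for illustration; this is not the published pipeline.

      import numpy as np

      rng = np.random.default_rng(1)

      # Pilot study: replicate log-ratios of unchanged genes, grouped by intensity stratum.
      pilot = {"low": rng.normal(0, 0.60, 200),
               "mid": rng.normal(0, 0.30, 200),
               "high": rng.normal(0, 0.15, 200)}

      def bootstrap_threshold(values, n_boot=2000, quantile=0.995):
          """Bootstrap the null spread of a single-array log-ratio in one stratum."""
          draws = rng.choice(values, size=n_boot, replace=True)
          return np.quantile(np.abs(draws), quantile)

      thresholds = {stratum: bootstrap_threshold(v) for stratum, v in pilot.items()}

      # Unreplicated follow-up study: flag genes exceeding the threshold for their stratum.
      observations = [("geneA", "low", 1.9), ("geneB", "high", 0.5), ("geneC", "mid", 0.2)]
      for gene, stratum, log_ratio in observations:
          flagged = abs(log_ratio) > thresholds[stratum]
          print(gene, stratum, "flagged as differentially expressed:", flagged)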

  18. Application of human reliability analysis to nursing errors in hospitals.

    PubMed

    Inoue, Kayoko; Koizumi, Akio

    2004-12-01

    Adverse events in hospitals, such as in surgery, anesthesia, radiology, intensive care, internal medicine, and pharmacy, are of worldwide concern and it is important, therefore, to learn from such incidents. There are currently no appropriate tools based on state-of-the art models available for the analysis of large bodies of medical incident reports. In this study, a new model was developed to facilitate medical error analysis in combination with quantitative risk assessment. This model enables detection of the organizational factors that underlie medical errors, and the expedition of decision making in terms of necessary action. Furthermore, it determines medical tasks as module practices and uses a unique coding system to describe incidents. This coding system has seven vectors for error classification: patient category, working shift, module practice, linkage chain (error type, direct threat, and indirect threat), medication, severity, and potential hazard. Such mathematical formulation permitted us to derive two parameters: error rates for module practices and weights for the aforementioned seven elements. The error rate of each module practice was calculated by dividing the annual number of incident reports of each module practice by the annual number of the corresponding module practice. The weight of a given element was calculated by the summation of incident report error rates for an element of interest. This model was applied specifically to nursing practices in six hospitals over a year; 5,339 incident reports with a total of 63,294,144 module practices conducted were analyzed. Quality assurance (QA) of our model was introduced by checking the records of quantities of practices and reproducibility of analysis of medical incident reports. For both items, QA guaranteed legitimacy of our model. Error rates for all module practices were approximately of the order 10(-4) in all hospitals. Three major organizational factors were found to underlie medical
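
    The two quantities the abstract defines can be written down directly: the error rate of a module practice is its annual number of incident reports divided by the annual number of times the practice is performed, and the weight of an element is the sum of the error rates attached to the reports tagged with that element. The counts below are hypothetical, chosen only to land in the 10(-4) range the abstract reports.

      # Sketch of the error-rate and weight calculations; counts are hypothetical.
      annual_reports = {"IV medication": 120, "blood transfusion": 8, "oral medication": 300}
      annual_practices = {"IV medication": 450_000, "blood transfusion": 30_000, "oral medication": 2_400_000}

      error_rate = {m: annual_reports[m] / annual_practices[m] for m in annual_reports}
      # roughly 2.7e-4, 2.7e-4 and 1.3e-4 respectively, i.e. of the order 10(-4)

      # Weight of an element (say, the night shift): sum of the error rates of the
      # module practices named in the incident reports tagged with that element.
      night_shift_reports = ["IV medication", "IV medication", "oral medication"]
      night_shift_weight = sum(error_rate[m] for m in night_shift_reports)
      print(error_rate)
      print(night_shift_weight)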

  19. Medication errors: definitions and classification

    PubMed Central

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  20. Dopamine reward prediction error coding.

    PubMed

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
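
    The signed prediction error described above maps directly onto a simple delta-rule update, sketched below in Python: a positive error when reward exceeds the prediction, roughly zero for a fully predicted reward, and a negative error when an expected reward is omitted. The learning rate and reward sequence are illustrative assumptions.

      # Minimal delta-rule sketch of a reward prediction error (illustrative values).
      predicted = 0.0
      alpha = 0.2                       # assumed learning rate
      rewards = [1.0, 1.0, 1.0, 0.0]    # three rewarded trials, then an omission

      for r in rewards:
          delta = r - predicted         # prediction error: positive, ~zero, or negative
          predicted += alpha * delta    # prediction moves towards the received reward
          print(f"reward={r:.1f}  prediction error={delta:+.2f}  new prediction={predicted:.2f}")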

  1. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards—an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377

  2. Understanding Human Error in Naval Aviation Mishaps.

    PubMed

    Miranda, Andrew T

    2018-04-01

    The objective was to better understand the external factors that influence the performance and decisions of aviators involved in Naval aviation mishaps. Mishaps in complex activities, ranging from aviation to nuclear power operations, are often the result of interactions between multiple components within an organization. The Naval aviation mishap database contains relevant information, both in quantitative statistics and qualitative reports, that permits analysis of such interactions to identify how the working atmosphere influences aviator performance and judgment. Results from 95 severe Naval aviation mishaps that occurred from 2011 through 2016 were analyzed using the Bayes' theorem probability formula. Then a content analysis was performed on a subset of relevant mishap reports. Out of the 14 latent factors analyzed, the Bayes' theorem application identified 6 that impacted specific aspects of aviator behavior during mishaps. Technological environment, misperceptions, and mental awareness impacted basic aviation skills. The remaining 3 factors were used to inform a content analysis of the contextual information within mishap reports. Teamwork failures were the result of plan continuation aggravated by diffused responsibility. Resource limitations and risk management deficiencies impacted judgments made by squadron commanders. The application of Bayes' theorem to historical mishap data revealed the role of latent factors within Naval aviation mishaps. Teamwork failures were seen to be considerably damaging to both aviator skill and judgment. Both the methods and findings have direct application for organizations interested in understanding the relationships between external factors and human error. The study presents real-world evidence to promote effective safety decisions.
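
    The kind of Bayes' theorem calculation the abstract describes can be sketched in a few lines: from mishap counts, estimate the probability that a latent factor was present given that a particular class of aviator error occurred. The counts below are hypothetical and are not the Naval aviation mishap data.

      # Hedged sketch of the Bayes' theorem application (hypothetical counts).
      n_mishaps = 95
      n_with_factor = 40            # mishaps citing a latent factor, e.g. technological environment
      n_with_skill_error = 60       # mishaps citing a basic-skill error
      n_with_both = 35              # mishaps citing both

      p_factor = n_with_factor / n_mishaps
      p_skill = n_with_skill_error / n_mishaps
      p_skill_given_factor = n_with_both / n_with_factor

      # Bayes: P(factor | skill error) = P(skill error | factor) * P(factor) / P(skill error)
      p_factor_given_skill = p_skill_given_factor * p_factor / p_skill
      print(f"P(latent factor present | skill-based error) = {p_factor_given_skill:.2f}")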

  3. Intrinsic interactive reinforcement learning - Using error-related potentials for real world human-robot interaction.

    PubMed

    Kim, Su Kyoung; Kirchner, Elsa Andrea; Stefes, Arne; Kirchner, Frank

    2017-12-14

    Reinforcement learning (RL) enables robots to learn their optimal behavioral strategies in dynamic environments based on feedback. Explicit human feedback during robot RL is advantageous, since an explicit reward function can be easily adapted. However, it is very demanding and tiresome for a human to continuously and explicitly generate feedback. Therefore, the development of implicit approaches is of high relevance. In this paper, we used an error-related potential (ErrP), an event-related activity in the human electroencephalogram (EEG), as an intrinsically generated implicit feedback (reward) for RL. Initially, we validated our approach with seven subjects in a simulated robot learning scenario. ErrPs were detected online in single trials with a balanced accuracy (bACC) of 91%, which was sufficient to learn to recognize gestures and the correct mapping between human gestures and robot actions in parallel. Finally, we validated our approach in a real robot scenario, in which seven subjects freely chose gestures and the real robot correctly learned the mapping between gestures and actions (ErrP detection: 90% bACC). In this paper, we demonstrated that intrinsically generated EEG-based human feedback in RL can successfully be used to implicitly improve gesture-based robot control during human-robot interaction. We call our approach intrinsic interactive RL.
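
    The balanced accuracy reported above is simply the average of sensitivity and specificity, which matters because ErrP trials are much rarer than correct trials; and once an ErrP is detected it can be mapped to a negative implicit reward for the learner. The sketch below shows both pieces with invented confusion-matrix counts and an assumed reward mapping.

      # Balanced accuracy (bACC) and a simple ErrP-to-reward mapping; values are illustrative.
      def balanced_accuracy(tp, fn, tn, fp):
          sensitivity = tp / (tp + fn)   # ErrP trials correctly detected
          specificity = tn / (tn + fp)   # non-ErrP trials correctly passed through
          return 0.5 * (sensitivity + specificity)

      print(balanced_accuracy(tp=45, fn=5, tn=87, fp=13))   # 0.885 for these counts

      def implicit_reward(errp_detected):
          """ErrP present => the robot's last action was judged wrong => negative reward."""
          return -1.0 if errp_detected else 1.0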

  4. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  5. Addressing Common Student Technical Errors in Field Data Collection: An Analysis of a Citizen-Science Monitoring Project.

    PubMed

    Philippoff, Joanna; Baumgartner, Erin

    2016-03-01

    The scientific value of citizen-science programs is limited when the data gathered are inconsistent, erroneous, or otherwise unusable. Long-term monitoring studies, such as Our Project In Hawai'i's Intertidal (OPIHI), have clear and consistent procedures and are thus a good model for evaluating the quality of participant data. The purpose of this study was to examine the kinds of errors made by student researchers during OPIHI data collection and factors that increase or decrease the likelihood of these errors. Twenty-four different types of errors were grouped into four broad error categories: missing data, sloppiness, methodological errors, and misidentification errors. "Sloppiness" was the most prevalent error type. Error rates decreased with field trip experience and student age. We suggest strategies to reduce data collection errors applicable to many types of citizen-science projects including emphasizing neat data collection, explicitly addressing and discussing the problems of falsifying data, emphasizing the importance of using standard scientific vocabulary, and giving participants multiple opportunities to practice to build their data collection techniques and skills.

  6. Are "Human Factors" the Primary Cause of Complications in the Field of Implant Dentistry?

    PubMed

    Renouard, Franck; Amalberti, René; Renouard, Erell

    Complications in medicine and dentistry are usually analyzed from a purely technical point of view. Rarely is the role of human behavior or judgment considered as a reason for adverse outcomes. When the role of human factors is considered, these are usually described in general terms rather than specifically identifying the factors responsible for an adverse event. The impact of cognitive and behavioral factors in the explanation of adverse events has been studied in other high-stakes areas such as aviation and nuclear power. Specific protocols have been developed to reduce rates of human error, and, where human error is unavoidable, to lessen its impact. This approach has dramatically reduced the incidence of accidents in these fields. This article aims to review how a similar approach may prove valuable in the reduction of complications in implant dentistry.

  7. Target Uncertainty Mediates Sensorimotor Error Correction.

    PubMed

    Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated.
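
    A toy quadratic-loss version of this idea is sketched below: correcting a perturbation e by an amount c leaves residual error (e - c) and costs effort, and when the residual error is weighted by the precision of the target estimate, the optimal correction is nearly complete for low uncertainty and only partial for high uncertainty. The loss form, effort weight, and numbers are assumptions for illustration, not the authors' fitted model.

      # Toy sketch: minimise precision*(e - c)^2 + effort_weight*c^2 over the correction c.
      def optimal_correction(e, sigma_target, effort_weight=1.0):
          precision = 1.0 / sigma_target ** 2                  # low uncertainty => high precision
          return precision * e / (precision + effort_weight)   # closed-form minimiser

      perturbation = 1.0   # size of the imposed cursor shift (arbitrary units)
      print(optimal_correction(perturbation, sigma_target=0.5))   # ~0.8: near-full correction
      print(optimal_correction(perturbation, sigma_target=2.0))   # ~0.2: partial correction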

  8. Recognizing and managing errors of cognitive underspecification.

    PubMed

    Duthie, Elizabeth A

    2014-03-01

    James Reason describes cognitive underspecification as incomplete communication that creates a knowledge gap. Errors occur when an information mismatch occurs in bridging that gap with a resulting lack of shared mental models during the communication process. There is a paucity of studies in health care examining this cognitive error and the role it plays in patient harm. The goal of the following case analyses is to facilitate accurate recognition, identify how it contributes to patient harm, and suggest appropriate management strategies. Reason's human error theory is applied in case analyses of errors of cognitive underspecification. Sidney Dekker's theory of human incident investigation is applied to event investigation to facilitate identification of this little recognized error. Contributory factors leading to errors of cognitive underspecification include workload demands, interruptions, inexperienced practitioners, and lack of a shared mental model. Detecting errors of cognitive underspecification relies on blame-free listening and timely incident investigation. Strategies for interception include two-way interactive communication, standardization of communication processes, and technological support to ensure timely access to documented clinical information. Although errors of cognitive underspecification arise at the sharp end with the care provider, effective management is dependent upon system redesign that mitigates the latent contributory factors. Cognitive underspecification is ubiquitous whenever communication occurs. Accurate identification is essential if effective system redesign is to occur.

  9. Joint Estimation of Contamination, Error and Demography for Nuclear DNA from Ancient Humans

    PubMed Central

    Slatkin, Montgomery

    2016-01-01

    When sequencing an ancient DNA sample from a hominin fossil, DNA from present-day humans involved in excavation and extraction will be sequenced along with the endogenous material. This type of contamination is problematic for downstream analyses as it will introduce a bias towards the population of the contaminating individual(s). Quantifying the extent of contamination is a crucial step as it allows researchers to account for possible biases that may arise in downstream genetic analyses. Here, we present an MCMC algorithm to co-estimate the contamination rate, sequencing error rate and demographic parameters—including drift times and admixture rates—for an ancient nuclear genome obtained from human remains, when the putative contaminating DNA comes from present-day humans. We assume we have a large panel representing the putative contaminant population (e.g. European, East Asian or African). The method is implemented in a C++ program called ‘Demographic Inference with Contamination and Error’ (DICE). We applied it to simulations and genome data from ancient Neanderthals and modern humans. With reasonable levels of genome sequence coverage (>3X), we find we can recover accurate estimates of all these parameters, even when the contamination rate is as high as 50%. PMID:27049965
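
    A drastically simplified, single-parameter version of such an MCMC scheme is sketched below: at sites where the endogenous and contaminant panels carry different alleles, the probability that a read matches the contaminant allele is roughly the contamination rate, so a Metropolis sampler with a binomial likelihood recovers it. The read counts and proposal width are hypothetical, and this toy is far simpler than DICE, which jointly handles sequencing error and demography.

      import math, random

      random.seed(0)
      n_informative_reads, n_contaminant_matches = 400, 90   # hypothetical counts

      def log_likelihood(c):
          if not 0.0 < c < 1.0:
              return -math.inf
          k, n = n_contaminant_matches, n_informative_reads
          return k * math.log(c) + (n - k) * math.log(1.0 - c)

      c, samples = 0.5, []
      for step in range(20_000):
          proposal = c + random.gauss(0.0, 0.05)             # symmetric random-walk proposal
          if math.log(random.random()) < log_likelihood(proposal) - log_likelihood(c):
              c = proposal                                   # Metropolis accept
          if step > 5_000:                                   # discard burn-in
              samples.append(c)
      print(sum(samples) / len(samples))                     # posterior mean near 90/400 = 0.225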

  10. #2 - An Empirical Assessment of Exposure Measurement Error ...

    EPA Pesticide Factsheets

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
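
    The bias and attenuation mentioned above can be illustrated with the classical measurement-error result: with independent exposure error of variance sigma_u^2, a single-pollutant slope is attenuated by the reliability ratio lambda = sigma_x^2 / (sigma_x^2 + sigma_u^2). The simulation below uses invented values to show the effect; it is not the assessment described in the factsheet.

      import numpy as np

      rng = np.random.default_rng(2)
      n, beta_true = 50_000, 0.8
      x = rng.normal(0.0, 1.0, n)                  # true exposure
      y = beta_true * x + rng.normal(0.0, 1.0, n)  # health outcome

      sigma_u = 0.7                                # classical measurement error SD (assumed)
      w = x + rng.normal(0.0, sigma_u, n)          # error-prone measured exposure

      beta_hat = np.polyfit(w, y, 1)[0]            # naive slope using the measured exposure
      lam = 1.0 / (1.0 + sigma_u ** 2)             # expected attenuation (var(x) = 1 here)
      print(beta_hat, beta_true * lam)             # both close to about 0.54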

  11. [Effect of Mn(II) on the error-prone DNA polymerase iota activity in extracts from human normal and tumor cells].

    PubMed

    Lakhin, A V; Efremova, A S; Makarova, I V; Grishina, E E; Shram, S I; Tarantul, V Z; Gening, L V

    2013-01-01

    The DNA polymerase iota (Pol iota), which has some peculiar features and is characterized by an extremely error-prone DNA synthesis, belongs to the group of enzymes preferentially activated by Mn2+ instead of Mg2+. In this work, the effect of Mn2+ on DNA synthesis in cell extracts from a) normal human and murine tissues, b) human tumor (uveal melanoma), and c) cultured human tumor cell lines SKOV-3 and HL-60 was tested. Each group displayed characteristic features of Mn-dependent DNA synthesis. The changes in the Mn-dependent DNA synthesis caused by malignant transformation of normal tissues are described. It was also shown that the error-prone DNA synthesis catalyzed by Pol iota in extracts of all cell types was efficiently suppressed by an RNA aptamer (IKL5) against Pol iota obtained in our work earlier. The obtained results suggest that IKL5 might be used to suppress the enhanced activity of Pol iota in tumor cells.

  12. Designing to Control Flight Crew Errors

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Willshire, Kelli F.

    1997-01-01

    It is widely accepted that human error is a major contributing factor in aircraft accidents. There has been a significant amount of research into why these errors occur, and many reports state that the design of the flight deck can actually predispose humans to err. This research has led to the call for changes in design according to human factors and human-centered principles. The National Aeronautics and Space Administration's (NASA) Langley Research Center has initiated an effort to design a human-centered flight deck from a clean slate (i.e., without the constraints of existing designs). The effort will be based on recent research in human-centered design philosophy and mission management categories. This design will match the human's model of the mission and function of the aircraft to reduce unnatural or non-intuitive interfaces. The product of this effort will be a flight deck design description, including training and procedures, a cross-reference or paper trail back to design hypotheses, and an evaluation of the design. The present paper will discuss the philosophy, process, and status of this design effort.

  13. Classification and reduction of pilot error

    NASA Technical Reports Server (NTRS)

    Rogers, W. H.; Logan, A. L.; Boley, G. D.

    1989-01-01

    Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationship of a small number of underlying factors, information processing mechanisms, and error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.

  14. 78 FR 17155 - Standards for the Growing, Harvesting, Packing, and Holding of Produce for Human Consumption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    ...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.

  15. Urology technical and non-technical skills development: the emerging role of simulation.

    PubMed

    Rashid, Prem; Gianduzzo, Troy R J

    2016-04-01

    To review the emerging role of technical and non-technical simulation in urological education and training. A review was conducted to examine the current role of simulation in urology training. A PUBMED search of the terms 'urology training', 'urology simulation' and 'urology education' revealed 11,504 titles. Three hundred and fifty-seven abstracts were identified as English language, peer reviewed papers pertaining to the role of simulation in urology and related topics. Key papers were used to explore themes. Some cross-referenced papers were also included. There is an ongoing need to ensure that training time is efficiently utilised while ensuring that optimal technical and non-technical skills are achieved. Changing working conditions and the need to minimise patient harm by inadvertent errors must be taken into account. Simulation models for specific technical aspects have been the mainstay of graduated step-wise low and high fidelity training. Whole scenario environments as well as non-technical aspects can be slowly incorporated into the curriculum. Doing so should also help define what have been challenging competencies to teach and evaluate. Dedicated time, resources and trainer up-skilling are important. Concurrent studies are needed to help evaluate the effectiveness of introducing step-wise simulation for technical and non-technical competencies. Simulation based learning remains the best avenue of progressing surgical education. Technical and non-technical simulation could be used in the selection process. There are good economic, logistic and safety reasons to pursue the process of ongoing development of simulation co-curricula. While the role of simulation is assured, its progress will depend on a structured program that takes advantage of what can be delivered via this medium. Overall, simulation can be developed further for urological training programs to encompass technical and non-technical skill development at all stages, including

  16. Target Uncertainty Mediates Sensorimotor Error Correction

    PubMed Central

    Vijayakumar, Sethu; Wolpert, Daniel M.

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects’ scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one’s response. By suggesting that subjects’ decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323

  17. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’ looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
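
    As a rough illustration of the likelihood paradigm described above, the sketch below (not the authors' pipeline) computes voxel-wise likelihood ratios under a simple normal model; the effect size mu1, the noise level sigma, and the evidence benchmark k are assumptions made only for this example.

```python
import numpy as np

def voxel_likelihood_ratios(data, mu0=0.0, mu1=1.0, sigma=1.0):
    """Likelihood ratio of H1 (mean activation mu1) versus H0 (mean mu0)
    for every voxel, assuming independent scans with known noise sigma.

    data : array of shape (n_scans, n_voxels)
    """
    n = data.shape[0]
    xbar = data.mean(axis=0)
    # For a normal model with known sigma, the log likelihood ratio reduces to
    # n * (xbar - (mu0 + mu1) / 2) * (mu1 - mu0) / sigma**2.
    log_lr = n * (xbar - 0.5 * (mu0 + mu1)) * (mu1 - mu0) / sigma ** 2
    return np.exp(log_lr)

rng = np.random.default_rng(0)
scans = rng.normal(0.0, 1.0, size=(20, 1000))  # mostly null voxels
scans[:, :50] += 1.0                           # 50 truly active voxels
lr = voxel_likelihood_ratios(scans)

k = 8.0  # evidential benchmark: LR > k counts as strong evidence for activation
print("strong evidence for activation:", int((lr > k).sum()))
print("strong evidence for the null:  ", int((lr < 1.0 / k).sum()))
```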

  18. Preventing medical errors by designing benign failures.

    PubMed

    Grout, John R

    2003-07-01

    One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water"). Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
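
    The scald-valve scenario lends itself to a compact logical sketch. The gates and events below are a simplified reconstruction of the example, not the article's actual fault trees; the point is only that the forcing function reroutes the same causes into the benign failure.

```python
# Simplified fault-tree sketch of the bathing scenario (hypothetical events).
def or_gate(*events):   # any contributing cause is enough
    return any(events)

def and_gate(*events):  # all contributing causes must hold
    return all(events)

def patient_scalded(water_too_hot, mixing_valve_failed, patient_in_tub, scald_valve_fitted):
    # A fitted scald valve shuts off the flow, so hot water cannot reach the tub;
    # the same causes then terminate in the benign event "no water".
    hot_water_reaches_tub = and_gate(or_gate(water_too_hot, mixing_valve_failed),
                                     not scald_valve_fitted)
    return and_gate(hot_water_reaches_tub, patient_in_tub)

# Without the forcing function the causes propagate to the undesirable event...
print(patient_scalded(water_too_hot=True, mixing_valve_failed=False,
                      patient_in_tub=True, scald_valve_fitted=False))  # True
# ...with the scald valve, the identical causes end in the benign failure.
print(patient_scalded(water_too_hot=True, mixing_valve_failed=False,
                      patient_in_tub=True, scald_valve_fitted=True))   # False
```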

  19. First-order error budgeting for LUVOIR mission

    NASA Astrophysics Data System (ADS)

    Lightsey, Paul A.; Knight, J. Scott; Feinberg, Lee D.; Bolcar, Matthew R.; Shaklan, Stuart B.

    2017-09-01

    Future large astronomical telescopes in space will have architectures with complex and demanding requirements to meet their science goals. The Large UV/Optical/IR Surveyor (LUVOIR) mission concept being assessed by the NASA/Goddard Space Flight Center is expected to be 9 to 15 meters in diameter, have a segmented primary mirror and be diffraction limited at a wavelength of 500 nanometers. The optical stability is expected to be in the picometer range for minutes to hours. Architecture studies to support the NASA Science and Technology Definition teams (STDTs) are underway to evaluate systems performance improvements to meet the science goals. To help define the technology needs and assess performance, a first order error budget has been developed. Like the JWST error budget, the error budget includes the active, adaptive and passive elements in spatial and temporal domains. JWST performance is scaled using first order approximations where appropriate and includes technical advances in telescope control.
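
    First-order budgets of this kind typically combine independent contributors by root-sum-square and compare the total against a top-level allocation. The terms and picometer values in the sketch below are illustrative assumptions, not LUVOIR allocations.

```python
import math

# Hypothetical picometer-level stability terms (illustrative values only, not
# actual LUVOIR allocations); independent contributors combine by root-sum-square.
terms_pm = {
    "segment rigid-body drift (active control residual)": 6.0,
    "segment figure thermal distortion (passive)": 4.0,
    "line-of-sight jitter mapped to wavefront": 3.0,
    "sensing and actuation noise (adaptive loop)": 5.0,
}
allocation_pm = 10.0  # assumed top-level wavefront-stability allocation

total_pm = math.sqrt(sum(v ** 2 for v in terms_pm.values()))
print(f"RSS of contributors: {total_pm:.2f} pm RMS "
      f"(allocation {allocation_pm:.0f} pm, margin {allocation_pm / total_pm:.2f}x)")
```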

  20. Navigation errors encountered using weather-mapping radar for helicopter IFR guidance to oil rigs

    NASA Technical Reports Server (NTRS)

    Phillips, J. D.; Bull, J. S.; Hegarty, D. M.; Dugan, D. C.

    1980-01-01

    In 1978 a joint NASA-FAA helicopter flight test was conducted to examine the use of weather-mapping radar for IFR guidance during landing approaches to oil rig helipads. The following navigation errors were measured: total system error, radar-range error, radar-bearing error, and flight technical error. Three problem areas were identified: (1) operational problems leading to pilot blunders, (2) poor navigation to the downwind final approach point, and (3) pure homing on final approach. Analysis of these problem areas suggests improvement in the radar equipment, approach procedure, and pilot training, and gives valuable insight into the development of future navigation aids to serve the off-shore oil industry.

  1. Error image aware content restoration

    NASA Astrophysics Data System (ADS)

    Choi, Sungwoo; Lee, Moonsik; Jung, Byunghee

    2015-12-01

    As the resolution of TV has significantly increased, content consumers have become increasingly sensitive to the subtlest defects in TV content. This rising standard in quality demanded by consumers has posed a new challenge in today's context where the tape-based process has transitioned to the file-based process: the transition necessitated digitalizing old archives, a process which inevitably produces errors such as disordered pixel blocks, scattered white noise, or totally missing pixels. Unsurprisingly, detecting and fixing such errors require a substantial amount of time and human labor to meet the standard demanded by today's consumers. In this paper, we introduce a novel, automated error restoration algorithm which can be applied to different types of classic errors by utilizing adjacent images while preserving the undamaged parts of an error image as much as possible. We tested our method on error images detected by our quality check system in the KBS (Korean Broadcasting System) video archive. We are also implementing the algorithm as a plugin for a well-known NLE (non-linear editing system), a familiar tool for quality control agents.
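
    A toy sketch of the restoration idea described above (not KBS's algorithm): only the pixels flagged as erroneous are replaced with an estimate taken from the adjacent frames, while the undamaged parts of the error image are preserved.

```python
import numpy as np

def restore(prev_frame, error_frame, next_frame, error_mask):
    """Replace only the damaged pixels (error_mask is True there) with a simple
    temporal estimate from the adjacent frames; undamaged pixels are untouched.
    A real system would also motion-compensate before blending."""
    repaired = error_frame.copy()
    estimate = (prev_frame.astype(np.float32) + next_frame.astype(np.float32)) / 2
    repaired[error_mask] = estimate[error_mask].astype(error_frame.dtype)
    return repaired

# Tiny usage example: an 8x8 grey frame with a 2x2 block of white noise.
prev_f = np.full((8, 8), 100, dtype=np.uint8)
next_f = np.full((8, 8), 104, dtype=np.uint8)
bad = np.full((8, 8), 100, dtype=np.uint8)
bad[2:4, 2:4] = 255                      # the damaged block
mask = np.zeros((8, 8), dtype=bool)
mask[2:4, 2:4] = True
print(restore(prev_f, bad, next_f, mask)[2:4, 2:4])  # repaired block, ~102
```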

  2. Human dignity and the future of the voluntary active euthanasia debate in South Africa.

    PubMed

    Jordaan, Donrich W

    2017-04-25

    The issue of voluntary active euthanasia was thrust into the public policy arena by the Stransham-Ford lawsuit. The High Court legalised voluntary active euthanasia - however, ostensibly only in the specific case of Mr Stransham-Ford. The Supreme Court of Appeal overturned the High Court judgment on technical grounds, not on the merits. This means that in future the courts can be approached again to consider the legalisation of voluntary active euthanasia. As such, Stransham-Ford presents a learning opportunity for both sides of the legalisation divide. In particular, conceptual errors pertaining to human dignity were made in Stransham-Ford, and can be avoided in future. In this article, I identify these errors and propose the following three corrective principles to inform future debate on the subject: (i) human dignity is violable; (ii) human suffering violates human dignity; and (iii) the 'natural' causes of suffering due to terminal illness do not exclude the application of human dignity.

  3. Comparison of technique errors of intraoral radiographs taken on film v photostimulable phosphor (PSP) plates.

    PubMed

    Zhang, Wenjian; Huynh, Carolyn P; Abramovitch, Kenneth; Leon, Inga-Lill K; Arvizu, Liliana

    2012-06-01

    The objective of this study was to compare the technical errors of intraoral radiographs exposed on film versus photostimulable phosphor (PSP) plates. The intraoral radiographic images exposed on phantoms from preclinical practical exams of dental and dental hygiene students were used. Each exam consisted of 10 designated periapical and bitewing views. A total of 107 film sets and 122 PSP sets were evaluated for technique errors, including placement, elongation, foreshortening, overlapping, cone cut, receptor bending, density, mounting, dot in apical area, and others. Some errors were further subcategorized as minor, major, or remake depending on the severity. The percentages of radiographs with various errors were compared between film and PSP by Fisher's exact test. Compared with film, there were significantly fewer foreshortening, elongation, and bending errors with PSP, but significantly more placement and overlapping errors. Using a wrong-sized receptor because the package sleeves are similar in color is an error unique to PSP. Optimum image quality is attainable with PSP plates as well as film. When switching from film to a PSP digital environment, more emphasis is necessary on placing the PSP plates, especially those with excessive packet edge, and then correcting the corresponding angulation for the beam alignment. Better design for improving intraoral visibility and easy identification of different sized PSP plates will improve the clinician's technical performance with this receptor.

  4. Associations between errors and contributing factors in aircraft maintenance

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2003-01-01

    In recent years cognitive error models have provided insights into the unsafe acts that lead to many accidents in safety-critical environments. Most models of accident causation are based on the notion that human errors occur in the context of contributing factors. However, there is a lack of published information on possible links between specific errors and contributing factors. A total of 619 safety occurrences involving aircraft maintenance were reported using a self-completed questionnaire. Of these occurrences, 96% were related to the actions of maintenance personnel. The types of errors that were involved, and the contributing factors associated with those actions, were determined. Each type of error was associated with a particular set of contributing factors and with specific occurrence outcomes. Among the associations were links between memory lapses and fatigue and between rule violations and time pressure. Potential applications of this research include assisting with the design of accident prevention strategies, the estimation of human error probabilities, and the monitoring of organizational safety performance.

  5. Research on error control and compensation in magnetorheological finishing.

    PubMed

    Dai, Yifan; Hu, Hao; Peng, Xiaoqiang; Wang, Jianmin; Shi, Feng

    2011-07-01

    Although magnetorheological finishing (MRF) is a deterministic finishing technology, the machining results always fall short of simulation precision in the actual process, and the precision requirements cannot be met in a single run but only after several iterations. We investigate the reasons for this problem through simulations and experiments. By controlling and compensating for the chief errors in the manufacturing procedure, such as the removal function calculation error, the positioning error of the removal function, and the dynamic performance limitations of the CNC machine, the residual error convergence ratio (the ratio of figure error before processing to figure error after processing) in a single run is markedly increased, and higher figure precision is achieved. Finally, an improved technical process is presented based on this research, and a verification experiment was performed on the experimental device we developed. The part is a circular plane mirror of fused silica material, and the surface figure error is improved from the initial λ/5 [peak-to-valley (PV) λ=632.8 nm], λ/30 [root-mean-square (rms)] to the final λ/40 (PV), λ/330 (rms) through one iteration in 4.4 min. Results show that a higher convergence ratio and processing precision can be obtained by adopting error control and compensation techniques in MRF.
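
    For reference, the convergence ratio defined above (figure error before processing divided by figure error after processing) can be read directly from the reported figures:

```python
# Convergence ratio = figure error before / figure error after, in waves
# of the HeNe line (lambda = 632.8 nm), using the figures reported above.
before_pv, before_rms = 1 / 5, 1 / 30    # initial figure error
after_pv, after_rms = 1 / 40, 1 / 330    # after one corrected MRF iteration

print(f"PV  convergence ratio: {before_pv / after_pv:.1f}")    # 8.0
print(f"rms convergence ratio: {before_rms / after_rms:.1f}")  # 11.0
```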

  6. Quantum error correction assisted by two-way noisy communication

    PubMed Central

    Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C. H.

    2014-01-01

    Pre-shared non-local entanglement dramatically simplifies and improves the performance of quantum error correction via entanglement-assisted quantum error-correcting codes (EAQECCs). However, even considering the noise in quantum communication only, the non-local sharing of a perfectly entangled pair is technically impossible unless additional resources are consumed, such as entanglement distillation, which actually compromises the efficiency of the codes. Here we propose an error-correcting protocol assisted by two-way noisy communication that is more easily realisable: all quantum communication is subjected to general noise and all entanglement is created locally without additional resources consumed. In our protocol the pre-shared noisy entangled pairs are purified simultaneously by the decoding process. For demonstration, we first present an easier implementation of the well-known EAQECC [[4, 1, 3; 1

  7. Quantum error correction assisted by two-way noisy communication.

    PubMed

    Wang, Zhuo; Yu, Sixia; Fan, Heng; Oh, C H

    2014-11-26

    Pre-shared non-local entanglement dramatically simplifies and improves the performance of quantum error correction via entanglement-assisted quantum error-correcting codes (EAQECCs). However, even considering the noise in quantum communication only, the non-local sharing of a perfectly entangled pair is technically impossible unless additional resources are consumed, such as entanglement distillation, which actually compromises the efficiency of the codes. Here we propose an error-correcting protocol assisted by two-way noisy communication that is more easily realisable: all quantum communication is subjected to general noise and all entanglement is created locally without additional resources consumed. In our protocol the pre-shared noisy entangled pairs are purified simultaneously by the decoding process. For demonstration, we first present an easier implementation of the well-known EAQECC [[4, 1, 3; 1

  8. An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems.

    DTIC Science & Technology

    1981-03-01

    Technical Note BN-962: An Error Analysis for the Finite Element Method Applied to Convection Diffusion Problems, by I. Babuška and W. G. Szymczak, March 1981. Maryland Univ., College Park, Inst. for Physical Science and Technology.

  9. Empirical Analysis of Systematic Communication Errors.

    DTIC Science & Technology

    1981-09-01

    human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links...phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission...communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share

  10. Competition between learned reward and error outcome predictions in anterior cingulate cortex.

    PubMed

    Alexander, William H; Brown, Joshua W

    2010-02-15

    The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.

  11. Exploring Reactions to Pilot Reliability Certification and Changing Attitudes on the Reduction of Errors

    ERIC Educational Resources Information Center

    Boedigheimer, Dan

    2010-01-01

    Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…

  12. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...

  13. 'Systemic Failures' and 'Human Error' in Canadian TSB Aviation Reports Between 1996 and 2002

    NASA Technical Reports Server (NTRS)

    Holloway, C. M.; Johnson, C. W.

    2004-01-01

    This paper describes the results of an independent analysis of the primary and contributory causes of aviation accidents in Canada between 1996 and 2003. The purpose of the study was to assess the comparative frequency of a range of causal factors in the reporting of these adverse events. Our results suggest that the majority of these high consequence accidents were attributed to human error. A large number of reports also mentioned wider systemic issues, including the managerial and regulatory context of aviation operations. These issues are more likely to appear as contributory rather than primary causes in this set of accident reports.

  14. Which non-technical skills do junior doctors require to prescribe safely? A systematic review.

    PubMed

    Dearden, Effie; Mellanby, Edward; Cameron, Helen; Harden, Jeni

    2015-12-01

    Prescribing errors are a major source of avoidable morbidity and mortality. Junior doctors write most in-hospital prescriptions and are the least experienced members of the healthcare team. This puts them at high risk of error and makes them attractive targets for interventions to improve prescription safety. Error analysis has shown a background of complex environments with multiple contributory conditions. Similar conditions in other high risk industries, such as aviation, have led to an increased understanding of so-called human factors and the use of non-technical skills (NTS) training to try to reduce error. To date no research has examined the NTS required for safe prescribing. The aim of this review was to develop a prototype NTS taxonomy for safe prescribing, by junior doctors, in hospital settings. A systematic search identified 14 studies analyzing prescribing behaviours and errors by junior doctors. Framework analysis was used to extract data from the studies and identify behaviours related to categories of NTS that might be relevant to safe and effective prescribing performance by junior doctors. Categories were derived from existing literature and inductively from the data. A prototype taxonomy of relevant categories (situational awareness, decision making, communication and team working, and task management) and elements was constructed. This prototype will form the basis of future work to create a tool that can be used for training and assessment of medical students and junior doctors to reduce prescribing error in the future. © 2015 The British Pharmacological Society.

  15. Human Factors Risk Analyses of a Doffing Protocol for Ebola-Level Personal Protective Equipment: Mapping Errors to Contamination.

    PubMed

    Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T

    2018-03-05

    Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
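
    The abstract does not state the formula behind its risk indexes; the sketch below assumes the common FMEA risk priority number (severity x occurrence x detectability) and uses hypothetical doffing-step ratings purely to show how per-step totals of this kind could be tallied.

```python
# Hypothetical FMEA tally: risk priority number = severity x occurrence x
# detectability, each rated 1-10. Steps, error modes and ratings are invented
# for illustration; they are not the study's data.
doffing_errors = [
    # (step, error mode, severity, occurrence, detectability)
    ("hand hygiene",      "skipped between glove removals",   7, 4, 4),
    ("hand hygiene",      "insufficient contact time",        5, 5, 5),
    ("PAPR hood removal", "hood surface contacts scrubs",     8, 3, 3),
    ("PAPR hood removal", "hood removed before outer gloves", 9, 2, 2),
]

totals = {}
for step, _mode, severity, occurrence, detectability in doffing_errors:
    totals[step] = totals.get(step, 0) + severity * occurrence * detectability

for step, risk in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{step:<18} total risk index = {risk}")
```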

  16. Visual Prediction Error Spreads Across Object Features in Human Visual Cortex

    PubMed Central

    Summerfield, Christopher; Egner, Tobias

    2016-01-01

    Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? By combining a behavioral protocol that

  17. Extending human potential in a technical learning environment

    NASA Astrophysics Data System (ADS)

    Fielden, Kay A.

    This thesis is a report of a participatory inquiry process looking at enhancing the learning process in a technical academic field in higher education by utilising tools and techniques which go beyond the rational/logical, intellectual domain in a functional, objective world. By empathising with, nurturing and sustaining the whole person, and taking account of past patterning as well as future visions including technological advances to augment human awareness, the scene is set for depth learning. Depth learning in a tertiary environment can only happen as a result of the dynamic that exists between the dominant, logical/rational, intellectual paradigm and the experiential extension of the boundaries surrounding this domain. Any experiences which suppress the full, holistic expression of our being alienate us from the fullness of the expression and hence from depth learning. Depth learning is indicated by intrinsic motivation, which is more likely to occur in a trusting and supporting environment. The research took place within a systemic intellectual framework, where emergence is the prime characteristic used to evaluate results.

  18. [Medical errors: inevitable but preventable].

    PubMed

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  19. Physician assistants and the disclosure of medical error.

    PubMed

    Brock, Douglas M; Quella, Alicia; Lipira, Lauren; Lu, Dave W; Gallagher, Thomas H

    2014-06-01

    Evolving state law, professional societies, and national guidelines, including those of the American Medical Association and Joint Commission, recommend that patients receive transparent communication when a medical error occurs. Recommendations for error disclosure typically consist of an explanation that an error has occurred, delivery of an explicit apology, an explanation of the facts around the event, its medical ramifications and how care will be managed, and a description of how similar errors will be prevented in the future. Although error disclosure is widely endorsed in the medical and nursing literature, there is little discussion of the unique role that the physician assistant (PA) might play in these interactions. PAs are trained in the medical model and technically practice under the supervision of a physician. They are also commonly integrated into interprofessional health care teams in surgical and urgent care settings. PA practice is characterized by widely varying degrees of provider autonomy. How PAs should collaborate with physicians in sensitive error disclosure conversations with patients is unclear. With the number of practicing PAs growing rapidly in nearly all domains of medicine, their role in the error disclosure process warrants exploration. The authors call for educational societies and accrediting agencies to support policy to establish guidelines for PA disclosure of error. They encourage medical and PA researchers to explore and report best-practice disclosure roles for PAs. Finally, they recommend that PA educational programs implement trainings in disclosure skills, and hospitals and supervising physicians provide and support training for practicing PAs.

  20. Reward positivity: Reward prediction error or salience prediction error?

    PubMed

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  1. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    PubMed

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

    In a medical environment such as an Intensive Care Unit, there are many possible sources of error, and one important source is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole medical CPHSystem design models. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by the human intellectual tasks.

  2. Teamwork and error in the operating room: analysis of skills and roles.

    PubMed

    Catchpole, K; Mishra, A; Handa, A; McCulloch, P

    2008-04-01

    To analyze the effects of surgical, anesthetic, and nursing teamwork skills on technical outcomes. The value of team skills in reducing adverse events in the operating room is presently receiving considerable attention. Current work has not yet identified in detail how the teamwork and communication skills of surgeons, anesthetists, and nurses affect the course of an operation. Twenty-six laparoscopic cholecystectomies and 22 carotid endarterectomies were studied using direct observation methods. For each operation, teams' skills were scored for the whole team, and for nursing, surgical, and anesthetic subteams on 4 dimensions (leadership and management [LM]; teamwork and cooperation; problem solving and decision making; and situation awareness). Operating time, errors in surgical technique, and other procedural problems and errors were measured as outcome parameters for each operation. The relationships between teamwork scores and these outcome parameters within each operation were examined using analysis of variance and linear regression. Surgical (F(2,42) = 3.32, P = 0.046) and anesthetic (F(2,42) = 3.26, P = 0.048) LM had significant but opposite relationships with operating time in each operation: operating time increased significantly with higher anesthetic but decreased with higher surgical LM scores. Errors in surgical technique had a strong association with surgical situation awareness (F(2,42) = 7.93, P < 0.001) in each operation. Other procedural problems and errors were related to the intraoperative LM skills of the nurses (F(5,1) = 3.96, P = 0.027). Detailed analysis of team interactions and dimensions is feasible and valuable, yielding important insights into relationships between nontechnical skills, technical performance, and operative duration. These results support the concept that interventions designed to improve teamwork and communication may have beneficial effects on technical performance and patient outcome.

  3. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work at each hour of the day

  4. Tutorial on Reed-Solomon error correction coding

    NASA Technical Reports Server (NTRS)

    Geisel, William A.

    1990-01-01

    This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
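
    As a companion to the tutorial's (15, 9) example, the sketch below shows arithmetic in GF(2^4) under the common primitive polynomial x^4 + x + 1; it is a minimal illustration of field multiplication and of why alpha = x generates all 15 nonzero symbols, not the tutorial's full worked example.

```python
# Field arithmetic for RS(15, 9): symbols are elements of GF(2^4), built here
# with the common primitive polynomial x^4 + x + 1 (0b10011).
PRIM = 0b10011

def gf16_mul(a, b):
    """Multiply two GF(2^4) elements (integers 0..15)."""
    result = 0
    while b:
        if b & 1:
            result ^= a       # addition in GF(2^m) is XOR
        b >>= 1
        a <<= 1
        if a & 0b10000:       # degree reached 4: reduce modulo the primitive poly
            a ^= PRIM
    return result

# alpha = x (the value 2) is primitive: its successive powers enumerate all 15
# nonzero symbols, which the (15, 9) generator-polynomial construction relies on.
alpha, powers, x = 2, [], 1
for _ in range(15):
    powers.append(x)
    x = gf16_mul(x, alpha)
print(powers)             # every nonzero element of GF(16) appears exactly once
print(len(set(powers)))   # 15
```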

  5. EDI and the Technical Communicator.

    ERIC Educational Resources Information Center

    Eiler, Mary Ann

    1994-01-01

    Assesses the role of technical communicators in electronic data interchange (EDI). Argues that, as experts in information design, human factors, instructional theory, and professional writing, technical communicators should be advocates of standard documentation protocols and should rethink the traditional concepts of "document" to…

  6. Use of units of measurement error in anthropometric comparisons.

    PubMed

    Lucas, Teghan; Henneberg, Maciej

    2017-09-01

    Anthropometrists attempt to minimise measurement errors; however, errors cannot be eliminated entirely. Currently, measurement errors are simply reported. Measurement errors should be incorporated into analyses of anthropometric data. This study proposes a method which incorporates measurement errors into reported values, replacing metric units with 'units of technical error of measurement (TEM)' by applying these to forensics, industrial anthropometry and biological variation. The USA armed forces anthropometric survey (ANSUR) contains 132 anthropometric dimensions of 3982 individuals. Concepts of duplication and Euclidean distance calculations were applied to the forensic-style identification of individuals in this survey. The National Size and Shape Survey of Australia contains 65 anthropometric measurements of 1265 women. This sample was used to show how a woman's body measurements expressed in TEM could be 'matched' to standard clothing sizes. Euclidean distances show that two sets of repeated anthropometric measurements of the same person cannot be matched (> 0) on measurements expressed in millimetres but can in units of TEM (= 0). Only 81 women can fit into any standard clothing size when matched using centimetres; with units of TEM, 1944 women fit. The proposed method can be applied to all fields that use anthropometry. Units of TEM are considered a more reliable unit of measurement for comparisons.
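
    A small sketch of the matching idea described above, using hypothetical TEM values and measurements rather than ANSUR or Australian survey data: each dimension is expressed in whole units of TEM before computing Euclidean distances, so repeated measurements of the same person can match exactly.

```python
import numpy as np

tem = np.array([4.0, 3.0, 2.5])               # assumed TEM per dimension, in mm
session_1 = np.array([1752.0, 906.0, 310.0])  # e.g. stature, sitting height, foot length
session_2 = np.array([1753.0, 905.0, 311.0])  # the same person, re-measured

def distance_mm(a, b):
    return float(np.linalg.norm(a - b))

def distance_tem(a, b, tem):
    # Express each dimension in whole units of TEM before comparing, so
    # differences smaller than the measurement error collapse to zero.
    return float(np.linalg.norm(np.round(a / tem) - np.round(b / tem)))

print(distance_mm(session_1, session_2))        # > 0: millimetre values rarely match
print(distance_tem(session_1, session_2, tem))  # 0.0 here: each dimension rounds to the same unit
```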

  7. Error-related brain activity and error awareness in an error classification paradigm.

    PubMed

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear what role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ɛ^-(d-1) error correction cycles. Here ɛ << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.

  9. Effect of Transducer Orientation on Errors in Ultrasound Image-Based Measurements of Human Medial Gastrocnemius Muscle Fascicle Length and Pennation

    PubMed Central

    Gandevia, Simon C.; Herbert, Robert D.

    2016-01-01

    Ultrasound imaging is often used to measure muscle fascicle lengths and pennation angles in human muscles in vivo. Theoretically the most accurate measurements are made when the transducer is oriented so that the image plane aligns with muscle fascicles and, for measurements of pennation, when the image plane also intersects the aponeuroses perpendicularly. However this orientation is difficult to achieve and usually there is some degree of misalignment. Here, we used simulated ultrasound images based on three-dimensional models of the human medial gastrocnemius, derived from magnetic resonance and diffusion tensor images, to describe the relationship between transducer orientation and measurement errors. With the transducer oriented perpendicular to the surface of the leg, the error in measurement of fascicle lengths was about 0.4 mm per degree of misalignment of the ultrasound image with the muscle fascicles. If the transducer is then tipped by 20°, the error increases to 1.1 mm per degree of misalignment. For a given degree of misalignment of muscle fascicles with the image plane, the smallest absolute error in fascicle length measurements occurs when the transducer is held perpendicular to the surface of the leg. Misalignment of the transducer with the fascicles may cause fascicle length measurements to be underestimated or overestimated. Contrary to widely held beliefs, it is shown that pennation angles are always overestimated if the image is not perpendicular to the aponeurosis, even when the image is perfectly aligned with the fascicles. An analytical explanation is provided for this finding. PMID:27294280

  10. Effect of Transducer Orientation on Errors in Ultrasound Image-Based Measurements of Human Medial Gastrocnemius Muscle Fascicle Length and Pennation.

    PubMed

    Bolsterlee, Bart; Gandevia, Simon C; Herbert, Robert D

    2016-01-01

    Ultrasound imaging is often used to measure muscle fascicle lengths and pennation angles in human muscles in vivo. Theoretically the most accurate measurements are made when the transducer is oriented so that the image plane aligns with muscle fascicles and, for measurements of pennation, when the image plane also intersects the aponeuroses perpendicularly. However this orientation is difficult to achieve and usually there is some degree of misalignment. Here, we used simulated ultrasound images based on three-dimensional models of the human medial gastrocnemius, derived from magnetic resonance and diffusion tensor images, to describe the relationship between transducer orientation and measurement errors. With the transducer oriented perpendicular to the surface of the leg, the error in measurement of fascicle lengths was about 0.4 mm per degree of misalignment of the ultrasound image with the muscle fascicles. If the transducer is then tipped by 20°, the error increases to 1.1 mm per degree of misalignment. For a given degree of misalignment of muscle fascicles with the image plane, the smallest absolute error in fascicle length measurements occurs when the transducer is held perpendicular to the surface of the leg. Misalignment of the transducer with the fascicles may cause fascicle length measurements to be underestimated or overestimated. Contrary to widely held beliefs, it is shown that pennation angles are always overestimated if the image is not perpendicular to the aponeurosis, even when the image is perfectly aligned with the fascicles. An analytical explanation is provided for this finding.

  11. Common errors in multidrug-resistant tuberculosis management.

    PubMed

    Monedero, Ignacio; Caminero, Jose A

    2014-02-01

    Multidrug-resistant tuberculosis (MDR-TB), defined as being resistant to at least rifampicin and isoniazid, has an increasing burden and threatens TB control. Diagnosis is limited and usually delayed, while treatment is long lasting, toxic and poorly effective. MDR-TB management in scarce-resource settings is demanding; however, it is feasible and extremely necessary. In these settings, cure rates do not usually exceed 60-70% and MDR-TB management is novel for many TB programs. In this challenging scenario, both clinical and programmatic errors are likely to occur. The majority of these errors may be prevented or alleviated with appropriate and timely training, in addition to uninterrupted procurement of high-quality drugs, updated national guidelines and laws, and an overall improvement in management capacities. While new tools for diagnosis and shorter, less toxic treatments remain unavailable in developing countries, MDR-TB management will remain complex in scarce-resource settings. Focusing special attention on the common errors in diagnosis, regimen design and especially treatment delivery may benefit patients and programs working with current, outdated tools. The present article is a compilation of typical errors repeatedly observed by the authors in a wide range of countries during technical assistance missions and trainings.

  12. Head Start Impact Study. Technical Report

    ERIC Educational Resources Information Center

    Puma, Michael; Bell, Stephen; Cook, Ronna; Heid, Camilla; Shapiro, Gary; Broene, Pam; Jenkins, Frank; Fletcher, Philip; Quinn, Liz; Friedman, Janet; Ciarico, Janet; Rohacek, Monica; Adams, Gina; Spier, Elizabeth

    2010-01-01

    This Technical Report is designed to provide technical detail to support the analysis and findings presented in the "Head Start Impact Study Final Report" (U.S. Department of Health and Human Services, January 2010). Chapter 1 provides an overview of the Head Start Impact Study and its findings. Chapter 2 provides technical information on the…

  13. SIMULATED HUMAN ERROR PROBABILITY AND ITS APPLICATION TO DYNAMIC HUMAN FAILURE EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herberger, Sarah M.; Boring, Ronald L.

    Objectives: Human reliability analysis (HRA) methods typically analyze human failure events (HFEs) at the overall task level. For dynamic HRA, it is important to model human activities at the subtask level. There exists a disconnect between dynamic subtask level and static task level that presents issues when modeling dynamic scenarios. For example, the SPAR-H method is typically used to calculate the human error probability (HEP) at the task level. As demonstrated in this paper, quantification in SPAR-H does not translate to the subtask level. Methods: Two different discrete distributions were generated for each SPAR-H Performance Shaping Factor (PSF) to define the frequency of PSF levels. The first distribution was a uniform, or uninformed distribution that assumed the frequency of each PSF level was equally likely. The second non-continuous distribution took the frequency of PSF level as identified from an assessment of the HERA database. These two different approaches were created to identify the resulting distribution of the HEP. The resulting HEP that appears closer to the known distribution, a log-normal centered on 1E-3, is the more desirable. Each approach then has median, average and maximum HFE calculations applied. To calculate these three values, three events, A, B and C are generated from the PSF level frequencies comprised of subtasks. The median HFE selects the median PSF level from each PSF and calculates HEP. The average HFE takes the mean PSF level, and the maximum takes the maximum PSF level. The same data set of subtask HEPs yields starkly different HEPs when aggregated to the HFE level in SPAR-H. Results: Assuming that each PSF level in each HFE is equally likely creates an unrealistic distribution of the HEP that is centered at 1. Next the observed frequency of PSF levels was applied with the resulting HEP behaving log-normally with a majority of the values under 2.5% HEP. The median, average and maximum HFE calculations did
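
    A minimal sketch of the task-level aggregation discussed above, assuming the SPAR-H convention of a nominal HEP of 1E-3 scaled by PSF multipliers and a SPAR-H style adjustment for large composites; the subtask multipliers are hypothetical, and the median/average/maximum aggregations mirror the three HFE calculations described in the abstract.

```python
import statistics

NOMINAL_HEP = 1e-3  # nominal HEP for an action task, scaled by PSF multipliers

# Hypothetical multipliers for one PSF (e.g. stress) across five subtasks; 1 = nominal.
subtask_multipliers = [1, 1, 2, 2, 5]

def hep(multiplier):
    # SPAR-H style adjustment keeps the probability below 1 for large composites.
    return (NOMINAL_HEP * multiplier) / (NOMINAL_HEP * (multiplier - 1) + 1)

# Aggregate subtask PSF levels to a single HFE-level HEP in the three ways
# described in the abstract: median, average and maximum.
for name, aggregate in [("median", statistics.median),
                        ("average", statistics.mean),
                        ("maximum", max)]:
    m = aggregate(subtask_multipliers)
    print(f"{name:<8} PSF level -> multiplier {m}, HFE-level HEP = {hep(m):.2e}")
```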

  14. Incident reporting: Its role in aviation safety and the acquisition of human error data

    NASA Technical Reports Server (NTRS)

    Reynard, W. D.

    1983-01-01

    The rationale for aviation incident reporting systems is presented and contrasted to some of the shortcomings of accident investigation procedures. The history of the United States' Aviation Safety Reporting System (ASRS) is outlined and the program's character explained. The planning elements that resulted in the ASRS program's voluntary, confidential, and non-punitive design are discussed. Immunity from enforcement action and from misuse of the volunteered data is explained and evaluated. Report generation techniques and the ASRS data analysis process are described; in addition, examples of the ASRS program's output and accomplishments are detailed. Finally, the value of incident reporting for the acquisition of safety information, particularly human error data, is explored.

  15. Sensitivity to prediction error in reach adaptation

    PubMed Central

    Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza

    2012-01-01

    It has been proposed that the brain predicts the sensory consequences of a movement and compares it to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
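
    A brief illustration of the sensitivity measure defined above; the saturating correction curve is a hypothetical stand-in for the empirical learning-from-error function, chosen only to show how sensitivity (trial-to-trial correction divided by error size) declines as errors grow.

```python
import numpy as np

def correction(error, a=0.8, s=4.0):
    # Hypothetical learning-from-error curve: proportional for small errors,
    # saturating for large ones.
    return a * s * np.tanh(error / s)

errors = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
sensitivity = correction(errors) / errors      # trial-to-trial change / error size
for e, k in zip(errors, sensitivity):
    print(f"error {e:>5.1f} -> sensitivity {k:.2f}")  # declines as errors grow
```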

  16. Comprehensive Anti-error Study on Power Grid Dispatching Based on Regional Regulation and Integration

    NASA Astrophysics Data System (ADS)

    Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang

    2018-01-01

    With the growing capacity of the power system and the trend toward larger generating units and higher voltages, dispatching operations are becoming more frequent and complicated, and the probability of operating errors increases. To address the lack of anti-error functions, the fragmented scheduling functions and the low working efficiency of the technical support systems used in regional regulation and integration, this paper proposes an integrated architecture for power grid dispatching anti-error checking based on cloud computing. An integrated anti-error system spanning the Energy Management System (EMS) and the Operation Management System (OMS) is also constructed. The architecture has good scalability and adaptability; it improves computational efficiency, reduces the cost of system operation and maintenance, and enhances the capability of regional regulation and anti-error checking, with broad development prospects.

  17. Understanding reliance on automation: effects of error type, error distribution, age and experience.

    PubMed

    Sanchez, Julian; Rogers, Wendy A; Fisk, Arthur D; Rovira, Ericka

    2014-03-01

    An obstacle detection task supported by "imperfect" automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation.

  18. Neurochemical enhancement of conscious error awareness.

    PubMed

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over study of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  19. Risk assessment of component failure modes and human errors using a new FMECA approach: application in the safety analysis of HDR brachytherapy.

    PubMed

    Giardina, M; Castiglia, F; Tomarchio, E

    2014-12-01

    Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
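
    For readers unfamiliar with the index the paper sets out to improve, the traditional FMECA risk priority number is the product of severity, occurrence, and detection scores (each typically rated 1-10). The sketch below computes and ranks RPNs for a few hypothetical brachytherapy failure modes; the items and scores are illustrative only and are not taken from the study.

      # Traditional FMECA ranking: RPN = Severity x Occurrence x Detection.
      # Failure modes and scores are hypothetical illustrations.
      failure_modes = [
          # (description,                          severity, occurrence, detection)
          ("source positioning error",                   9,          3,         4),
          ("wrong treatment time entered (human)",       8,          4,         3),
          ("afterloader drive failure",                  7,          2,         5),
      ]

      ranked = sorted(
          ((s * o * d, name) for name, s, o, d in failure_modes),
          reverse=True,
      )
      for rpn, name in ranked:
          print(f"RPN={rpn:4d}  {name}")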

  20. Mistake proofing: changing designs to reduce error

    PubMed Central

    Grout, J R

    2006-01-01

    Mistake proofing uses changes in the physical design of processes to reduce human error. It can be used to change designs in ways that prevent errors from occurring, to detect errors after they occur but before harm occurs, to allow processes to fail safely, or to alter the work environment to reduce the chance of errors. Effective mistake proofing design changes should initially be effective in reducing harm, be inexpensive, and easily implemented. Over time these design changes should make life easier and speed up the process. Ideally, the design changes should increase patients' and visitors' understanding of the process. These designs should themselves be mistake proofed and follow the good design practices of other disciplines. PMID:17142609

  1. Technological Advancements and Error Rates in Radiation Therapy Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique.
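
    The association test and effect estimate described in the Methods can be reproduced in outline with standard statistical software. The sketch below applies Fisher's exact test to a hypothetical 2 x 2 table of errors by treatment technique; the per-technique fraction counts are invented for illustration (only the overall totals above are reported), and scipy is assumed to be available.

      from scipy.stats import fisher_exact

      # 2x2 table: rows = IMRT vs 3D/conventional,
      # columns = fractions with / without a reported error (hypothetical split)
      table = [[19, 60_000 - 19],
               [136, 181_546 - 136]]

      odds_ratio, p_value = fisher_exact(table)
      print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")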

  2. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
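
    The abstract argues that a variance estimated from a small sample needs an explicit error bar. As a rough illustration (not the authors' derivation), the sketch below attaches the normal-theory standard error SE(s²) = s² · sqrt(2/(n−1)) to sample variances of simulated mutation levels, showing how slowly the uncertainty shrinks with sample size.

      import numpy as np

      def variance_with_error_bar(mutation_levels):
          x = np.asarray(mutation_levels, dtype=float)
          n = x.size
          s2 = x.var(ddof=1)                    # sample variance of mutation levels
          se = s2 * np.sqrt(2.0 / (n - 1))      # assumed normal-theory standard error
          return s2, se

      rng = np.random.default_rng(1)
      for n in (10, 20, 50, 200):
          s2, se = variance_with_error_bar(rng.normal(0.5, 0.15, n))
          print(f"n={n:4d}  variance={s2:.4f} +/- {se:.4f}")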

  3. Error reduction in EMG signal decomposition

    PubMed Central

    Kline, Joshua C.

    2014-01-01

    Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
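
    The error-reduction idea described above, combining multiple decomposition estimates to retain the more probable firing instances, can be illustrated with a simple consensus rule: a firing time is kept only if a majority of repeated decompositions place a firing within a small temporal tolerance of it. The sketch below is a deliberately simplified illustration of that idea, not the authors' algorithm; the tolerance, vote threshold, and firing times are assumptions.

      import numpy as np

      def consensus_firings(runs, tol=0.002, min_votes=2):
          """runs: list of arrays of firing times (s) from repeated decompositions."""
          reference = runs[0]
          kept = []
          for t in reference:
              votes = sum(np.any(np.abs(other - t) <= tol) for other in runs)
              if votes >= min_votes:
                  # localize the firing as the mean of the matching estimates
                  matches = [other[np.argmin(np.abs(other - t))]
                             for other in runs
                             if np.any(np.abs(other - t) <= tol)]
                  kept.append(float(np.mean(matches)))
          return np.array(kept)

      run_a = np.array([0.100, 0.212, 0.305, 0.410])
      run_b = np.array([0.101, 0.211, 0.409])                 # one missed firing
      run_c = np.array([0.099, 0.213, 0.304, 0.411, 0.520])   # one false detection
      print(consensus_firings([run_a, run_b, run_c]))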

  4. CONCLUSIONS AND RECOMMENDATIONS OF THE EXPERT PANEL: TECHNICAL WORKSHOP ON HUMAN MILK SURVEILLANCE AND BIOMONITORING FOR ENVIRONMENTAL CHEMICALS IN THE UNITED STATES

    EPA Science Inventory

    The Technical Workshop focused on questions related to interpretation of information gathered from human milk biomonitoring studies. Biomonitoring can measure a person’s exposure to a chemical in his/her tissue. Human milk is a unique biological matrix for biomonitoring because i...

  5. Simulation-based ureteroscopy skills training curriculum with integration of technical and non-technical skills: a randomised controlled trial.

    PubMed

    Brunckhorst, Oliver; Shahid, Shahab; Aydin, Abdullatif; McIlhenny, Craig; Khan, Shahid; Raza, Syed Johar; Sahai, Arun; Brewin, James; Bello, Fernando; Kneebone, Roger; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2015-09-01

    Current training modalities within ureteroscopy have been extensively validated and must now be integrated within a comprehensive curriculum. Additionally, non-technical skills often cause surgical error and little research has been conducted to combine this with technical skills teaching. This study therefore aimed to develop and validate a curriculum for semi-rigid ureteroscopy, integrating both technical and non-technical skills teaching within the programme. Delphi methodology was utilised for curriculum development and content validation, with a randomised trial then conducted (n = 32) for curriculum evaluation. The developed curriculum consisted of four modules; initially developing basic technical skills and subsequently integrating non-technical skills teaching. Sixteen participants underwent the simulation-based curriculum and were subsequently assessed, together with the control cohort (n = 16) within a full immersion environment. Both technical (Time to completion, OSATS and a task specific checklist) and non-technical (NOTSS) outcome measures were recorded with parametric and non-parametric analyses used depending on the distribution of our data as evaluated by a Shapiro-Wilk test. Improvements within the intervention cohort demonstrated educational value across all technical and non-technical parameters recorded, including time to completion (p < 0.01), OSATS scores (p < 0.001), task specific checklist scores (p = 0.011) and NOTSS scores (p < 0.001). Content validity, feasibility and acceptability were all demonstrated through curriculum development and post-study questionnaire results. The current developed curriculum demonstrates that integrating both technical and non-technical skills teaching is both educationally valuable and feasible. Additionally, the curriculum offers a validated simulation-based training modality within ureteroscopy and a framework for the development of other simulation-based programmes.
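
    The statistical protocol described above, testing each outcome for normality and then choosing a parametric or non-parametric comparison, is straightforward to express in code. The sketch below uses hypothetical OSATS-like scores for the intervention and control cohorts and assumes scipy; it illustrates the decision rule only and is not the study's analysis script.

      from scipy.stats import shapiro, ttest_ind, mannwhitneyu
      import numpy as np

      rng = np.random.default_rng(2)
      intervention = rng.normal(24, 4, 16)   # hypothetical scores, n = 16 per cohort
      control = rng.normal(19, 4, 16)

      # Shapiro-Wilk on each group decides between parametric and non-parametric tests
      normal = (shapiro(intervention).pvalue > 0.05) and (shapiro(control).pvalue > 0.05)
      if normal:
          stat, p = ttest_ind(intervention, control)
          test = "independent t-test"
      else:
          stat, p = mannwhitneyu(intervention, control)
          test = "Mann-Whitney U"
      print(f"{test}: statistic={stat:.2f}, p={p:.4f}")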

  6. Applying human rights to maternal health: UN Technical Guidance on rights-based approaches.

    PubMed

    Yamin, Alicia Ely

    2013-05-01

    In the last few years there have been several critical milestones in acknowledging the centrality of human rights to sustainably addressing the scourge of maternal death and morbidity around the world, including from the United Nations Human Rights Council. In 2012, the Council adopted a resolution welcoming a Technical Guidance on rights-based approaches to maternal mortality and morbidity, and calling for a report on its implementation in 2 years. The present paper provides an overview of the contents and significance of the Guidance. It reviews how the Guidance can assist policymakers in improving women's health and their enjoyment of rights by setting out the implications of adopting a human rights-based approach at each step of the policy cycle, from planning and budgeting, to ensuring implementation, to monitoring and evaluation, to fostering accountability mechanisms. The Guidance should also prove useful to clinicians in understanding rights frameworks as applied to maternal health. Copyright © 2013 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.

  7. Comprehensive analysis of a medication dosing error related to CPOE.

    PubMed

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  8. Distinct prediction errors in mesostriatal circuits of the human brain mediate learning about the values of both states and actions: evidence from high-resolution fMRI.

    PubMed

    Colas, Jaron T; Pauli, Wolfgang M; Larsen, Tobias; Tyszka, J Michael; O'Doherty, John P

    2017-10-01

    Prediction-error signals consistent with formal models of "reinforcement learning" (RL) have repeatedly been found within dopaminergic nuclei of the midbrain and dopaminoceptive areas of the striatum. However, the precise form of the RL algorithms implemented in the human brain is not yet well determined. Here, we created a novel paradigm optimized to dissociate the subtypes of reward-prediction errors that function as the key computational signatures of two distinct classes of RL models-namely, "actor/critic" models and action-value-learning models (e.g., the Q-learning model). The state-value-prediction error (SVPE), which is independent of actions, is a hallmark of the actor/critic architecture, whereas the action-value-prediction error (AVPE) is the distinguishing feature of action-value-learning algorithms. To test for the presence of these prediction-error signals in the brain, we scanned human participants with a high-resolution functional magnetic-resonance imaging (fMRI) protocol optimized to enable measurement of neural activity in the dopaminergic midbrain as well as the striatal areas to which it projects. In keeping with the actor/critic model, the SVPE signal was detected in the substantia nigra. The SVPE was also clearly present in both the ventral striatum and the dorsal striatum. However, alongside these purely state-value-based computations we also found evidence for AVPE signals throughout the striatum. These high-resolution fMRI findings suggest that model-free aspects of reward learning in humans can be explained algorithmically with RL in terms of an actor/critic mechanism operating in parallel with a system for more direct action-value learning.
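
    The two prediction-error signals dissociated in this study differ only in which value estimate they are anchored to. The sketch below computes both for a single hypothetical state transition: the SVPE uses the critic's state values, while the AVPE uses the value of the chosen action. The discount factor, reward, and value tables are illustrative assumptions, not quantities from the fMRI analysis.

      gamma = 0.9          # discount factor (assumed)
      reward = 1.0         # reward received on this trial (assumed)

      # Critic's state values and an action-value table (hypothetical numbers)
      V = {"s": 0.2, "s_next": 0.5}
      Q = {("s", "a"): 0.1, ("s_next", "a"): 0.4, ("s_next", "b"): 0.6}

      # SVPE: independent of the chosen action (actor/critic signature)
      svpe = reward + gamma * V["s_next"] - V["s"]

      # AVPE: tied to the value of the chosen action (Q-learning signature)
      avpe = reward + gamma * max(Q[("s_next", "a")], Q[("s_next", "b")]) - Q[("s", "a")]

      print(f"SVPE = {svpe:.3f}, AVPE = {avpe:.3f}")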

  9. Research on calibration error of carrier phase against antenna arraying

    NASA Astrophysics Data System (ADS)

    Sun, Ke; Hou, Xiaomin

    2016-11-01

    A key technical difficulty of uplink antenna arraying is that signals from the separate antennas cannot be automatically aligned at a target in deep space. The far-field power-combining gain is directly determined by the accuracy of carrier phase calibration, so the entire arraying system must be analyzed in order to improve that accuracy. This paper analyzes the factors affecting the carrier phase calibration error of an uplink antenna arraying system, including phase measurement and equipment errors, uplink channel phase-shift errors, position errors of the ground antennas, the calibration receiver and the target spacecraft, and errors caused by atmospheric turbulence, and discusses a spatial and temporal autocorrelation model of the atmospheric disturbances. Because the antennas of an uplink array share no common reference signal for continuous calibration, the system must be calibrated periodically, with calibration referenced to communication with one or more spacecraft over a given period. Since deep-space targets cannot automatically align the combined received signal, the alignment must be performed in advance on the ground. The data show that, with existing technology, the error can be controlled within the range demanded by the required accuracy of carrier phase calibration, and the total error can be kept within a reasonable range.

  10. Physician's error: medical or legal concept?

    PubMed

    Mujovic-Zornic, Hajrija M

    2010-06-01

    This article deals with the common term covering the various physicians' errors that often occur in the daily practice of health care. The author begins with the term medical malpractice, defined broadly as the practice of unjustified acts, or failures to act, on the part of a physician or other health care professional, which results in harm to the patient. It is an umbrella term that includes many types of medical errors, especially physicians' errors. The author then discusses the concept of the physician's error in particular, which is no longer understood only in the traditional way, as a classic error of performing something manually wrong without the necessary skill (the medical concept), but as an error that violates the patient's basic rights and carries legal consequences (the legal concept). In every case the essential element of liability is to establish this error as a breach of the physician's duty. The first point to note is that the standard of procedure and the standard of due care against which the physician will be judged is not that of the ordinary reasonable man who has no medical expertise. The court's decision should give the final answer and legal qualification in each concrete case. The author's conclusion is that stronger protection of human rights in the area of health equally demands a broader concept of the physician's error, with emphasis on its legal subject matter.

  11. Managing Errors to Reduce Accidents in High Consequence Networked Information Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganter, J.H.

    1999-02-01

    Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.

  12. Modeling coherent errors in quantum error correction

    NASA Astrophysics Data System (ADS)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^-(dn-1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
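
    The basic reason coherent errors escape the Pauli approximation is that rotation amplitudes add across repeated applications, whereas stochastic flip probabilities do not. The single-qubit sketch below contrasts the two accumulation laws; it conveys only this intuition and is not the paper's repetition-code calculation, and the rotation angle is an arbitrary assumed value.

      import numpy as np

      eps = 0.01                      # rotation angle per step (radians), assumed
      N = np.arange(1, 201)

      # Coherent accumulation: N rotations of angle eps compose to a rotation of N*eps
      coherent = np.sin(N * eps / 2.0) ** 2

      # Pauli (stochastic) approximation: independent flips with probability p per step
      p = np.sin(eps / 2.0) ** 2
      pauli = 0.5 * (1.0 - (1.0 - 2.0 * p) ** N)

      for n in (10, 50, 200):
          print(f"N={n:3d}  coherent={coherent[n-1]:.2e}  Pauli={pauli[n-1]:.2e}")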

  13. 77 FR 32400 - Fenamidone; Pesticide Tolerance; Technical Amendment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-01

    ...., of the preamble, the text correctly listed the tolerance level for the commodity ``strawberry'' at 0... the tolerance level for ``strawberry'' at 0.15. This technical amendment corrects that error. III. Why... revising the entry for ``Strawberry'' in paragraph (d) to read as follows: Sec. [emsp14]180.579 Fenamidone...

  14. Errors in causal inference: an organizational schema for systematic error and random error.

    PubMed

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Understanding reliance on automation: effects of error type, error distribution, age and experience

    PubMed Central

    Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka

    2015-01-01

    An obstacle detection task supported by “imperfect” automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142

  16. Analysis of human factors effects on the safety of transporting radioactive waste materials: Technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abkowitz, M.D.; Abkowitz, S.B.; Lepofsky, M.

    1989-04-01

    This report examines the extent of human factors effects on the safety of transporting radioactive waste materials. It is seen principally as a scoping effort, to establish whether there is a need for DOE to undertake a more formal approach to studying human factors in radioactive waste transport, and if so, logical directions for that program to follow. Human factors effects are evaluated on driving and loading/transfer operations only. Particular emphasis is placed on the driving function, examining the relationship between human error and safety as it relates to the impairment of driver performance. Although multi-modal in focus, the widespread availability of data and previous literature on truck operations resulted in a primary study focus on the trucking mode from the standpoint of policy development. In addition to the analysis of human factors accident statistics, the report provides relevant background material on several policies that have been instituted or are under consideration, directed at improving human reliability in the transport sector. On the basis of reported findings, preliminary policy areas are identified. 71 refs., 26 figs., 5 tabs.

  17. Inborn errors of metabolism and the human interactome: a systems medicine approach.

    PubMed

    Woidy, Mathias; Muntau, Ania C; Gersting, Søren W

    2018-02-05

    The group of inborn errors of metabolism (IEM) displays a marked heterogeneity and IEM can affect virtually all functions and organs of the human organism; however, IEM share that their associated proteins function in metabolism. Most proteins carry out cellular functions by interacting with other proteins, and thus are organized in biological networks. Therefore, diseases are rarely the consequence of single gene mutations but of the perturbations caused in the related cellular network. Systematic approaches that integrate multi-omics and database information into biological networks have successfully expanded our knowledge of complex disorders but network-based strategies have been rarely applied to study IEM. We analyzed IEM on a proteome scale and found that IEM-associated proteins are organized as a network of linked modules within the human interactome of protein interactions, the IEM interactome. Certain IEM disease groups formed self-contained disease modules, which were highly interlinked. On the other hand, we observed disease modules consisting of proteins from many different disease groups in the IEM interactome. Moreover, we explored the overlap between IEM and non-IEM disease genes and applied network medicine approaches to investigate shared biological pathways, clinical signs and symptoms, and links to drug targets. The provided resources may help to elucidate the molecular mechanisms underlying new IEM, to uncover the significance of disease-associated mutations, to identify new biomarkers, and to develop novel therapeutic strategies.

  18. NASA Technical Standards Program and Implications for Lessons Learned and Technical Standard Integration

    NASA Technical Reports Server (NTRS)

    Gill, Paul S.; Garcia, Danny; Vaughan, William W.; Parker, Nelson C. (Technical Monitor)

    2002-01-01

    The National Aeronautics and Space Administration consists of fourteen Facilities throughout the United States. They are organized to support the Agency's principal Enterprises: (1) Space Science, (2) Earth Science, (3) Aerospace Technology, (4) Human Exploration and Development of Space, and (5) Biological and Physical Research. Technical Standards are important to the activities of each Enterprise and have been an integral part in the development and operation of NASA Programs and Projects since the Agency was established in 1958. However, for years each Center was responsible for its own standards development and selection of non-NASA technical standards that met the needs of Programs and Projects for which they were responsible. There were few Agencywide applicable Technical Standards, mainly those in the area of safety. Department of Defense Standards and Specifications were the foundation and main source for Technical Standards used by the Agency. This process existed until about 1997 when NASA embarked on a Program to convert NASA's Center-developed Technical Standards into Agencywide endorsed NASA Preferred Technical Standards. In addition, action was taken regarding the formal adoption of non-NASA Technical Standards (DOD, SAE, ASTM, ASME, IEEE, etc.) as NASA Preferred Technical Standards.

  19. Does teaching non-technical skills to medical students improve those skills and simulated patient outcome?

    PubMed

    Hagemann, Vera; Herbstreit, Frank; Kehren, Clemens; Chittamadathil, Jilson; Wolfertz, Sandra; Dirkmann, Daniel; Kluge, Annette; Peters, Jürgen

    2017-03-29

    The purpose of this study is to evaluate the effects of a tailor-made, non-technical skills seminar on medical students' behaviour, attitudes, and performance during simulated patient treatment. Seventy-seven students were randomized to either a non-technical skills seminar (NTS group, n=43) or a medical seminar (control group, n=34). The human patient simulation was used as an evaluation tool. Before the seminars, all students performed the same simulated emergency scenario to provide baseline measurements. After the seminars, all students were exposed to a second scenario, and behavioural markers for evaluating their non-technical skills were rated. Furthermore, teamwork-relevant attitudes were measured before and after the scenarios, and perceived stress was measured following each simulation. All simulations were also evaluated for various medical endpoints. Non-technical skills concerning situation awareness (p<.01, r=0.5) and teamwork (p<.01, r=0.45) improved from simulation I to II in the NTS group. Decision making improved in both groups (NTS: p<.01, r=0.39; control: p<.01, r=0.46). The attitude 'handling errors' improved significantly in the NTS group (p<.05, r=0.34). Perceived stress decreased from simulation I to II in both groups. Medical endpoints and patients' outcomes did not differ significantly between the groups in simulation II. This study highlights the effectiveness of a single brief seminar on non-technical skills to improve students' non-technical skills. In a next step, to improve students' handling of emergencies and patient outcomes, non-technical skills seminars should be accompanied by exercises and more broadly embedded in the medical school curriculum.

  20. Medication errors in anesthesia: unacceptable or unavoidable?

    PubMed

    Dhawan, Ira; Tewari, Anurag; Sehgal, Sankalp; Sinha, Ashish Chandra

    Medication errors are common causes of patient morbidity and mortality, and they also add a financial burden to the institution. Though the impact varies from no harm to serious adverse effects including death, they need attention on a priority basis since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful unless a change in the existing protocols and systems is incorporated. Often, drug errors that occur cannot be reversed; the best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse, and dilution error), incorrect administration route, underdosing, and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration, or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes, and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems such as VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture, and organizational support, can together help prevent these errors. Copyright © 2016. Published by Elsevier Editora Ltda.

  1. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  2. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  3. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  4. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  5. Overview of medical errors and adverse events

    PubMed Central

    2012-01-01

    Safety is a global concept that encompasses efficiency, security of care, reactivity of caregivers, and satisfaction of patients and relatives. Patient safety has emerged as a major target for healthcare improvement. Quality assurance is a complex task, and patients in the intensive care unit (ICU) are more likely than other hospitalized patients to experience medical errors, due to the complexity of their conditions, need for urgent interventions, and considerable workload fluctuation. Medication errors are the most common medical errors and can induce adverse events. Two approaches are available for evaluating and improving quality-of-care: the room-for-improvement model, in which problems are identified, plans are made to resolve them, and the results of the plans are measured; and the monitoring model, in which quality indicators are defined as relevant to potential problems and then monitored periodically. Indicators that reflect structures, processes, or outcomes have been developed by medical societies. Surveillance of these indicators is organized at the hospital or national level. Using a combination of methods improves the results. Errors are caused by combinations of human factors and system factors, and information must be obtained on how people make errors in the ICU environment. Preventive strategies are more likely to be effective if they rely on a system-based approach, in which organizational flaws are remedied, rather than a human-based approach of encouraging people not to make errors. The development of a safety culture in the ICU is crucial to effective prevention and should occur before the evaluation of safety programs, which are more likely to be effective when they involve bundles of measures. PMID:22339769

  6. Exponential error reduction in pretransfusion testing with automation.

    PubMed

    South, Susan F; Casina, Tony S; Li, Lily

    2012-08-01

    Protecting the safety of blood transfusion is the top priority of transfusion service laboratories. Pretransfusion testing is a critical element of the entire transfusion process to enhance vein-to-vein safety. Human error associated with manual pretransfusion testing is a cause of transfusion-related mortality and morbidity and most human errors can be eliminated by automated systems. However, the uptake of automation in transfusion services has been slow and many transfusion service laboratories around the world still use manual blood group and antibody screen (G&S) methods. The goal of this study was to compare error potentials of commonly used manual (e.g., tiles and tubes) versus automated (e.g., ID-GelStation and AutoVue Innova) G&S methods. Routine G&S processes in seven transfusion service laboratories (four with manual and three with automated G&S methods) were analyzed using failure modes and effects analysis to evaluate the corresponding error potentials of each method. Manual methods contained a higher number of process steps ranging from 22 to 39, while automated G&S methods only contained six to eight steps. Corresponding to the number of the process steps that required human interactions, the risk priority number (RPN) of the manual methods ranged from 5304 to 10,976. In contrast, the RPN of the automated methods was between 129 and 436 and also demonstrated a 90% to 98% reduction of the defect opportunities in routine G&S testing. This study provided quantitative evidence on how automation could transform pretransfusion testing processes by dramatically reducing error potentials and thus would improve the safety of blood transfusion. © 2012 American Association of Blood Banks.

  7. Distinct prediction errors in mesostriatal circuits of the human brain mediate learning about the values of both states and actions: evidence from high-resolution fMRI

    PubMed Central

    Pauli, Wolfgang M.; Larsen, Tobias; Tyszka, J. Michael; O’Doherty, John P.

    2017-01-01

    Prediction-error signals consistent with formal models of “reinforcement learning” (RL) have repeatedly been found within dopaminergic nuclei of the midbrain and dopaminoceptive areas of the striatum. However, the precise form of the RL algorithms implemented in the human brain is not yet well determined. Here, we created a novel paradigm optimized to dissociate the subtypes of reward-prediction errors that function as the key computational signatures of two distinct classes of RL models—namely, “actor/critic” models and action-value-learning models (e.g., the Q-learning model). The state-value-prediction error (SVPE), which is independent of actions, is a hallmark of the actor/critic architecture, whereas the action-value-prediction error (AVPE) is the distinguishing feature of action-value-learning algorithms. To test for the presence of these prediction-error signals in the brain, we scanned human participants with a high-resolution functional magnetic-resonance imaging (fMRI) protocol optimized to enable measurement of neural activity in the dopaminergic midbrain as well as the striatal areas to which it projects. In keeping with the actor/critic model, the SVPE signal was detected in the substantia nigra. The SVPE was also clearly present in both the ventral striatum and the dorsal striatum. However, alongside these purely state-value-based computations we also found evidence for AVPE signals throughout the striatum. These high-resolution fMRI findings suggest that model-free aspects of reward learning in humans can be explained algorithmically with RL in terms of an actor/critic mechanism operating in parallel with a system for more direct action-value learning. PMID:29049406

  8. Increased instrument intelligence--can it reduce laboratory error?

    PubMed

    Jekelis, Albert W

    2005-01-01

    Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer generation analyzers as compared with older analyzers and the impact on error reduction. Three generations of immuno-chemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte of the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and potential impact of the error detection on the overall process. The analyzers' performance stratified according to their level of internal process control. The older analyzers without bubble detection reported 23 erroneous results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the

  9. 25 CFR 23.42 - Technical assistance.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Technical assistance. 23.42 Section 23.42 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR HUMAN SERVICES INDIAN CHILD WELFARE ACT General and Uniform Grant Administration Provisions and Requirements § 23.42 Technical assistance. (a) Pre-award and...

  10. Error studies of Halbach Magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, S.

    2017-03-02

    These error studies were done on the Halbach magnets for the CBETA “First Girder” as described in note [CBETA001]. The CBETA magnets have since changed slightly to the lattice in [CBETA009]. However, this is not a large enough change to significantly affect the results here. The QF and BD arc FFAG magnets are considered. For each assumed set of error distributions and each ideal magnet, 100 random magnets with errors are generated. These are then run through an automated version of the iron wire multipole cancellation algorithm. The maximum wire diameter allowed is 0.063” as in the proof-of-principle magnets. Initially, 32 wires (2 per Halbach wedge) are tried, then if this does not achieve 1e-4 level accuracy in the simulation, 48 and then 64 wires. By “1e-4 accuracy”, it is meant the FOM defined by √(Σ_{n≥sextupole} (a_n² + b_n²)) is less than 1 unit, where the multipoles are taken at the maximum nominal beam radius, R=23mm for these magnets. The algorithm initially uses 20 convergence iterations. If 64 wires do not achieve 1e-4 accuracy, this is increased to 50 iterations to check for slow converging cases. There are also classifications for magnets that do not achieve 1e-4 but do achieve 1e-3 (FOM ≤ 10 units). This is technically within the spec discussed in the Jan 30, 2017 review; however, there will be errors in practical shimming not dealt with in the simulation, so it is preferable to do much better than the spec in the simulation.
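
    The acceptance criterion quoted above reduces to a single figure of merit: the quadrature sum of all multipole coefficients at the sextupole and above, evaluated at the nominal beam radius, must stay below 1 unit (1e-4 of the main field). The sketch below computes that figure of merit from a set of hypothetical multipole coefficients; the coefficient values are placeholders, not measured CBETA data.

      import math

      # (a_n, b_n) multipole coefficients in "units" (1e-4 of the main field),
      # already evaluated at the nominal beam radius R = 23 mm, for
      # n = sextupole and above. Values are hypothetical placeholders.
      multipoles = [(0.4, -0.3), (0.2, 0.1), (-0.05, 0.15)]

      fom = math.sqrt(sum(a * a + b * b for a, b in multipoles))
      status = "within" if fom <= 1.0 else "outside"
      print(f"FOM = {fom:.3f} units -> {status} the 1e-4 target")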

  11. Human Error and Commercial Aviation Accidents: A Comprehensive, Fine-Grained Analysis Using HFACS

    DTIC Science & Technology

    2006-07-01

    Only fragments of this record's abstract are available. Figure 2 of the report presents the HFACS framework; the recovered text notes that highly practiced and seemingly automatic behaviors are particularly susceptible to attention and/or memory failures, and that the third and final error form, perceptual errors, has received comparatively less attention than the error forms included in most error frameworks.

  12. Inborn Errors in Immunity

    PubMed Central

    Lionakis, M.S.; Hajishengallis, G.

    2015-01-01

    In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229

  13. Cognitive errors: thinking clearly when it could be child maltreatment.

    PubMed

    Laskey, Antoinette L

    2014-10-01

    Cognitive errors have been studied in a broad array of fields, including medicine. The more that is understood about how the human mind processes complex information, the more it becomes clear that certain situations are particularly susceptible to less than optimal outcomes because of these errors. This article explores how some of the known cognitive errors may influence the diagnosis of child abuse, resulting in both false-negative and false-positive diagnoses. Suggested remedies for these errors are offered. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy.

    PubMed

    Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug

    2011-05-01

    calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers us good insights to understand the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also can provide quality control of laboratory experiments for endoscopic procedures before the experiments are transferred from the laboratory to human subjects.

  15. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    PubMed Central

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  16. A nucleotide-analogue-induced gain of function corrects the error-prone nature of human DNA polymerase iota.

    PubMed

    Ketkar, Amit; Zafar, Maroof K; Banerjee, Surajit; Marquez, Victor E; Egli, Martin; Eoff, Robert L

    2012-06-27

    Y-family DNA polymerases participate in replication stress and DNA damage tolerance mechanisms. The properties that allow these enzymes to copy past bulky adducts or distorted template DNA can result in a greater propensity for them to make mistakes. Of the four human Y-family members, human DNA polymerase iota (hpol ι) is the most error-prone. In the current study, we elucidate the molecular basis for improving the fidelity of hpol ι through use of the fixed-conformation nucleotide North-methanocarba-2'-deoxyadenosine triphosphate (N-MC-dATP). Three crystal structures were solved of hpol ι in complex with DNA containing a template 2'-deoxythymidine (dT) paired with an incoming dNTP or modified nucleotide triphosphate. The ternary complex of hpol ι inserting N-MC-dATP opposite dT reveals that the adenine ring is stabilized in the anti orientation about the pseudo-glycosyl torsion angle, which mimics precisely the mutagenic arrangement of dGTP:dT normally preferred by hpol ι. The stabilized anti conformation occurs without notable contacts from the protein but likely results from constraints imposed by the bicyclo[3.1.0]hexane scaffold of the modified nucleotide. Unmodified dATP and South-MC-dATP each adopt syn glycosyl orientations to form Hoogsteen base pairs with dT. The Hoogsteen orientation exhibits weaker base-stacking interactions and is less catalytically favorable than anti N-MC-dATP. Thus, N-MC-dATP corrects the error-prone nature of hpol ι by preventing the Hoogsteen base-pairing mode normally observed for hpol ι-catalyzed insertion of dATP opposite dT. These results provide a previously unrecognized means of altering the efficiency and the fidelity of a human translesion DNA polymerase.

  17. A nucleotide analogue induced gain of function corrects the error-prone nature of human DNA polymerase iota

    PubMed Central

    Ketkar, Amit; Zafar, Maroof K.; Banerjee, Surajit; Marquez, Victor E.; Egli, Martin; Eoff, Robert L

    2012-01-01

    Y-family DNA polymerases participate in replication stress and DNA damage tolerance mechanisms. The properties that allow these enzymes to copy past bulky adducts or distorted template DNA can result in a greater propensity for them to make mistakes. Of the four human Y-family members, human DNA polymerase iota (hpol ι) is the most error-prone. In the current study, we elucidate the molecular basis for improving the fidelity of hpol ι through use of the fixed-conformation nucleotide North-methanocarba-2′-deoxyadenosine triphosphate (N-MC-dATP). Three crystal structures were solved of hpol ι in complex with DNA containing a template 2′-deoxythymidine (dT) paired with an incoming dNTP or modified nucleotide triphosphate. The ternary complex of hpol ι inserting N-MC-dATP opposite dT reveals that the adenine ring is stabilized in the anti orientation about the pseudo-glycosyl torsion angle (χ), which mimics precisely the mutagenic arrangement of dGTP:dT normally preferred by hpol ι. The stabilized anti conformation occurs without notable contacts from the protein but likely results from constraints imposed by the bicyclo[3.1.0]hexane scaffold of the modified nucleotide. Unmodified dATP and South-MC-dATP each adopt syn glycosyl orientations to form Hoogsteen base pairs with dT. The Hoogsteen orientation exhibits weaker base stacking interactions and is less catalytically favorable than anti N-MC-dATP. Thus, N-MC-dATP corrects the error-prone nature of hpol ι by preventing the Hoogsteen base-pairing mode normally observed for hpol ι-catalyzed insertion of dATP opposite dT. These results provide a previously unrecognized means of altering the efficiency and the fidelity of a human translesion DNA polymerase. PMID:22632140

  18. Error begat error: design error analysis and prevention in social infrastructure projects.

    PubMed

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, and law-and-order buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in combination to prevent design errors from occurring and thereby ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.

  19. Why do adult dogs (Canis familiaris) commit the A-not-B search error?

    PubMed

    Sümegi, Zsófia; Kis, Anna; Miklósi, Ádám; Topál, József

    2014-02-01

    It has been recently reported that adult domestic dogs, like human infants, tend to commit perseverative search errors; that is, they select the previously rewarded but now empty location in the Piagetian A-not-B search task because of the experimenter's ostensive communicative cues. There is, however, an ongoing debate over whether these findings reveal that dogs can use human ostensive referential communication as a source of information, or whether the phenomenon can be accounted for by "simpler" explanations such as insufficient attention and learning based on local enhancement. In 2 experiments the authors systematically manipulated the type of human cueing (communicative or noncommunicative) adjacent to the A hiding place during both the A and B trials. Results highlight 3 important aspects of the dogs' A-not-B error: (a) search errors are influenced to a certain extent by dogs' motivation to retrieve the toy object; (b) human communicative and noncommunicative signals have different error-inducing effects; and (c) communicative signals presented at the A hiding place during the B trials but not during the A trials play a crucial role in inducing the A-not-B error, which can be induced even without demonstrating repeated hiding events at location A. These findings further confirm the notion that the perseverative search error, at least partially, reflects a "ready-to-obey" attitude in the dog rather than insufficient attention and/or working memory.

  20. Final Rule for Technical Amendments to the Highway and Nonroad Diesel Regulations

    EPA Pesticide Factsheets

    This action corrects errors and omissions from the previous rules, makes minor changes to the regulations to assist entities with regulatory compliance, and makes technical amendments that resulted from discussions with various diesel stakeholders.

  1. Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.

    PubMed

    Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L

    2018-05-01

    Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and

  2. Measuring Error Identification and Recovery Skills in Surgical Residents.

    PubMed

    Sternbach, Joel M; Wang, Kevin; El Khoury, Rym; Teitelbaum, Ezra N; Meyerson, Shari L

    2017-02-01

    Although error identification and recovery skills are essential for the safe practice of surgery, they have not traditionally been taught or evaluated in residency training. This study validates a method for assessing error identification and recovery skills in surgical residents using a thoracoscopic lobectomy simulator. We developed a 5-station, simulator-based examination containing the most commonly encountered cognitive and technical errors occurring during division of the superior pulmonary vein for left upper lobectomy. Successful completion of each station requires identification and correction of these errors. Examinations were video recorded and scored in a blinded fashion using an examination-specific rating instrument evaluating task performance as well as error identification and recovery skills. Evidence of validity was collected in the categories of content, response process, internal structure, and relationship to other variables. Fifteen general surgical residents (9 interns and 6 third-year residents) completed the examination. Interrater reliability was high, with an intraclass correlation coefficient of 0.78 between 4 trained raters. Station scores ranged from 64% to 84% correct. All stations adequately discriminated between high- and low-performing residents, with discrimination ranging from 0.35 to 0.65. The overall examination score was significantly higher for intermediate residents than for interns (mean, 74 versus 64 of 90 possible; p = 0.03). The described simulator-based examination with embedded errors and its accompanying assessment tool can be used to measure error identification and recovery skills in surgical residents. This examination provides a valid method for comparing teaching strategies designed to improve error recognition and recovery to enhance patient safety. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
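
    As a concrete illustration of the station-level discrimination values reported above, the sketch below computes a simple upper-lower group discrimination index in Python. It is a generic psychometric calculation under assumed data structures, not necessarily the method the authors used; the resident scores shown are invented.

        # Hypothetical sketch of an upper-lower group discrimination index,
        # one common way to quantify how well a station separates high- and
        # low-performing examinees (not necessarily the authors' exact method).

        def discrimination_index(scores, station):
            """scores: list of dicts, one per resident; per-station scores assumed 0/1."""
            ranked = sorted(scores, key=lambda s: s["total"], reverse=True)
            k = max(1, len(ranked) // 3)          # top and bottom thirds
            upper, lower = ranked[:k], ranked[-k:]
            p_upper = sum(s["stations"][station] for s in upper) / k
            p_lower = sum(s["stations"][station] for s in lower) / k
            return p_upper - p_lower              # ranges from -1 to 1

        residents = [
            {"total": 84, "stations": {"vein_division": 1}},
            {"total": 70, "stations": {"vein_division": 1}},
            {"total": 64, "stations": {"vein_division": 0}},
        ]
        print(discrimination_index(residents, "vein_division"))  # 1.0 for this toy data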

  3. DNA double strand break repair in human bladder cancer is error prone and involves microhomology-associated end-joining

    PubMed Central

    Bentley, Johanne; Diggle, Christine P.; Harnden, Patricia; Knowles, Margaret A.; Kiltie, Anne E.

    2004-01-01

    In human cells DNA double strand breaks (DSBs) can be repaired by the non-homologous end-joining (NHEJ) pathway. In a background of NHEJ deficiency, DSBs with mismatched ends can be joined by an error-prone mechanism involving joining between regions of nucleotide microhomology. The majority of joins formed from a DSB with partially incompatible 3′ overhangs by cell-free extracts from human glioblastoma (MO59K) and urothelial (NHU) cell lines were accurate and produced by the overlap/fill-in of mismatched termini by NHEJ. However, repair of DSBs by extracts prepared from tissue of four high-grade bladder carcinomas resulted in no accurate join formation. Junctions were formed by the non-random deletion of terminal nucleotides and showed a preference for annealing at a microhomology of 8 nt buried within the DNA substrate; this process was not dependent on functional Ku70, DNA-PK or XRCC4. Junctions were repaired in the same manner in MO59K extracts in which accurate NHEJ was inactivated by inhibition of Ku70 or DNA-PKcs. These data indicate that bladder tumour extracts are unable to perform accurate NHEJ such that error-prone joining predominates. Therefore, in high-grade tumours mismatched DSBs are repaired by a highly mutagenic, microhomology-mediated, alternative end-joining pathway, a process that may contribute to genomic instability observed in bladder cancer. PMID:15466592

  4. Error identification and recovery by student nurses using human patient simulation: opportunity to improve patient safety.

    PubMed

    Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L

    2010-02-01

    This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.

  5. Modular error embedding

    DOEpatents

    Sandford, II, Maxwell T.; Handel, Theodore G.; Ettinger, J. Mark

    1999-01-01

    A method of embedding auxiliary information into the digital representation of host data containing noise in the low-order bits. The method applies to digital data representing analog signals, for example digital images. The method reduces the error introduced by other methods that replace the low-order bits with auxiliary information. By a substantially reverse process, the embedded auxiliary data can be retrieved easily by an authorized user through use of a digital key. The modular error embedding method includes a process to permute the order in which the host data values are processed. The method doubles the amount of auxiliary information that can be added to host data values, in comparison with bit-replacement methods for high bit-rate coding. The invention preserves human perception of the meaning and content of the host data, permitting the addition of auxiliary data in the amount of 50% or greater of the original host data.
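
    For readers unfamiliar with low-order-bit embedding, the following Python sketch illustrates the general idea of hiding auxiliary bits in host samples using a key-seeded permutation of the processing order. It is a plain bit-replacement illustration under assumed inputs, not the patented modular-error method, which is designed to introduce less error than this simple approach.

        # Minimal sketch of embedding auxiliary bits in the low-order bits of
        # host samples, with a key-seeded permutation of the processing order.
        # Illustrates the general idea only; the patented modular-error method
        # reduces the introduced error further than plain bit replacement.
        import random

        def embed(host, bits, key):
            order = list(range(len(host)))
            random.Random(key).shuffle(order)          # key-driven permutation
            out = list(host)
            for pos, bit in zip(order, bits):
                out[pos] = (out[pos] & ~1) | bit       # replace the low-order bit
            return out

        def extract(stego, n_bits, key):
            order = list(range(len(stego)))
            random.Random(key).shuffle(order)          # same key reproduces the order
            return [stego[pos] & 1 for pos in order[:n_bits]]

        host = [200, 13, 57, 142, 99, 61]
        bits = [1, 0, 1, 1]
        stego = embed(host, bits, key=42)
        assert extract(stego, len(bits), key=42) == bits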

  6. Nature of the refractive errors in rhesus monkeys (Macaca mulatta) with experimentally induced ametropias.

    PubMed

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L

    2010-08-23

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.

  7. 75 FR 15342 - Advisory Committees; Technical Amendment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-29

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration 21 CFR Part 14 [Docket No. FDA-2010-N-0001] Advisory Committees; Technical Amendment Agency: Food and Drug Administration, HHS. ACTION: Final rule; technical amendment. SUMMARY: The Food and Drug Administration (FDA) is amending its...

  8. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    PubMed

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Handling the Difficulties of Technical School Students in the Construction and Interpretation of Graphic Representations

    ERIC Educational Resources Information Center

    Marinos, Andreas

    2010-01-01

    In this work, an attempt is made to evaluate the errors that have to do with the interpretation and construction of graphic representations. Although the students are studying in the second year of technical high school (secondary education), i.e. in schools with an emphasis in technical subjects (post junior secondary), it is observed that they…

  10. Understanding adverse events: human factors.

    PubMed Central

    Reason, J

    1995-01-01

    (1) Human rather than technical failures now represent the greatest threat to complex and potentially hazardous systems. This includes healthcare systems. (2) Managing the human risks will never be 100% effective. Human fallibility can be moderated, but it cannot be eliminated. (3) Different error types have different underlying mechanisms, occur in different parts of the organisation, and require different methods of risk management. The basic distinctions are between: slips, lapses, trips, and fumbles (execution failures) and mistakes (planning or problem-solving failures), with mistakes divided into rule-based mistakes and knowledge-based mistakes; errors (information-handling problems) and violations (motivational problems); and active versus latent failures. Active failures are committed by those in direct contact with the patient; latent failures arise in organisational and managerial spheres and their adverse effects may take a long time to become evident. (4) Safety-significant errors occur at all levels of the system, not just at the sharp end. Decisions made in the upper echelons of the organisation create the conditions in the workplace that subsequently promote individual errors and violations. Latent failures are present long before an accident and are hence prime candidates for principled risk management. (5) Measures that involve sanctions and exhortations (that is, moralistic measures directed to those at the sharp end) have only very limited effectiveness, especially so in the case of highly trained professionals. (6) Human factors problems are a product of a chain of causes in which the individual psychological factors (that is, momentary inattention, forgetting, etc.) are the last and least manageable links. Attentional "capture" (preoccupation or distraction) is a necessary condition for the commission of slips and lapses. Yet, its occurrence is almost impossible to predict or control effectively. The same is true of the factors associated with

  11. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    PubMed

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    This article focuses on the notion of medical error, considering both its medical and legal aspects. The need to consolidate the notion of «medical error» in legislation, together with criteria for its legal assessment, is substantiated. In writing the article, the authors used the empirical method as well as general scientific and comparative legal methods. The concept of medical error in its civil and legal aspects was compared from the perspectives of Ukrainian, European and American scholars. The problem of medical errors has been known since ancient times and exists worldwide; regardless of the level of development of medicine, there is no country where doctors never make errors. According to statistics, medical errors are among the top five causes of death worldwide, while the provision of medical services affects practically everyone. Because human life and health are recognized in Ukraine as the highest social values, medical services must be of high quality and effective. The provision of substandard medical services causes harm to health, and sometimes to life; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must be consolidated in law. Moreover, the legal assessment of medical errors must be based on uniform principles enshrined in legislation and confirmed by judicial practice.

  12. Inborn Errors of Fructose Metabolism. What Can We Learn from Them?

    PubMed

    Tran, Christel

    2017-04-03

    Fructose is one of the main sweetening agents in the human diet and its ingestion is increasing globally. Dietary sugar has particular effects on those whose capacity to metabolize fructose is limited. While intolerance to carbohydrates is a frequent finding in children, inborn errors of carbohydrate metabolism are rare conditions. Three inborn errors are known in the pathway of fructose metabolism: (1) essential or benign fructosuria due to fructokinase deficiency; (2) hereditary fructose intolerance; and (3) fructose-1,6-bisphosphatase deficiency. This review focuses on the description of the clinical symptoms and biochemical anomalies in these three inborn errors of metabolism. The potential toxic effects of fructose in healthy humans are also discussed. Studies conducted in patients with inborn errors of fructose metabolism have helped us to understand fructose metabolism and its potential toxicity in healthy humans. The influence of fructose on the glycolytic pathway and on purine catabolism is the cause of hypoglycemia, lactic acidosis and hyperuricemia. The discovery that fructose-mediated generation of uric acid may have a causal role in diabetes and obesity has provided new insights into the pathogenesis of these common diseases.

  13. Inborn Errors of Fructose Metabolism. What Can We Learn from Them?

    PubMed Central

    Tran, Christel

    2017-01-01

    Fructose is one of the main sweetening agents in the human diet and its ingestion is increasing globally. Dietary sugar has particular effects on those whose capacity to metabolize fructose is limited. While intolerance to carbohydrates is a frequent finding in children, inborn errors of carbohydrate metabolism are rare conditions. Three inborn errors are known in the pathway of fructose metabolism: (1) essential or benign fructosuria due to fructokinase deficiency; (2) hereditary fructose intolerance; and (3) fructose-1,6-bisphosphatase deficiency. This review focuses on the description of the clinical symptoms and biochemical anomalies in these three inborn errors of metabolism. The potential toxic effects of fructose in healthy humans are also discussed. Studies conducted in patients with inborn errors of fructose metabolism have helped us to understand fructose metabolism and its potential toxicity in healthy humans. The influence of fructose on the glycolytic pathway and on purine catabolism is the cause of hypoglycemia, lactic acidosis and hyperuricemia. The discovery that fructose-mediated generation of uric acid may have a causal role in diabetes and obesity has provided new insights into the pathogenesis of these common diseases. PMID:28368361

  14. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series.

    PubMed

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-07-17

    Continuity, real-time performance, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly degrade the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on a forecasted time series is proposed by analyzing the characteristics of the periodic oscillation errors. The method obtains multiple sets of navigation solutions with different phase delays by virtue of a forecasted time series acquired from the measurement data of the inertial measurement unit (IMU). With the help of curve fitting based on the least-squares method, the forecasted time series is obtained while small angular motion interference is identified and removed during initial alignment. Finally, the periodic oscillation errors are restricted based on the principle that a periodic oscillation signal is eliminated by averaging it with a copy of itself delayed by half its period. Simulation and test results show that the method performs well in restricting the Schuler, Foucault, and Earth oscillation errors of SINS.

  15. A Method for Oscillation Errors Restriction of SINS Based on Forecasted Time Series

    PubMed Central

    Zhao, Lin; Li, Jiushun; Cheng, Jianhua; Jia, Chun; Wang, Qiufan

    2015-01-01

    Continuity, real-time performance, and accuracy are the key technical indexes for evaluating the comprehensive performance of a strapdown inertial navigation system (SINS). However, Schuler, Foucault, and Earth periodic oscillation errors significantly degrade the real-time accuracy of SINS. A method for oscillation error restriction of SINS based on a forecasted time series is proposed by analyzing the characteristics of the periodic oscillation errors. The method obtains multiple sets of navigation solutions with different phase delays by virtue of a forecasted time series acquired from the measurement data of the inertial measurement unit (IMU). With the help of curve fitting based on the least-squares method, the forecasted time series is obtained while small angular motion interference is identified and removed during initial alignment. Finally, the periodic oscillation errors are restricted based on the principle that a periodic oscillation signal is eliminated by averaging it with a copy of itself delayed by half its period. Simulation and test results show that the method performs well in restricting the Schuler, Foucault, and Earth oscillation errors of SINS. PMID:26193283
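
    The cancellation principle used in the final step can be illustrated with a short NumPy sketch: averaging a signal with a copy of itself delayed by half the oscillation period removes that sinusoidal component exactly. The signal, period, and amplitudes below are illustrative values, not data from the paper.

        # Half-wave-delay averaging: a sinusoid of period T cancels when the
        # signal is averaged with a copy of itself delayed by T/2.
        import numpy as np

        T = 5060.0                                  # s, roughly the Schuler period
        dt = 1.0
        t = np.arange(0, 3 * T, dt)
        true_vel = np.full_like(t, 100.0)           # constant "true" signal
        meas = true_vel + 0.5 * np.sin(2 * np.pi * t / T)   # add oscillation error

        half = int(round((T / 2) / dt))
        corrected = 0.5 * (meas[half:] + meas[:-half])      # average with T/2 delay

        print(np.abs(meas - true_vel).max())                 # ~0.5 before correction
        print(np.abs(corrected - true_vel[half:]).max())     # ~0   after correction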

  16. Altitude deviations: Breakdowns of an error-tolerant system

    NASA Technical Reports Server (NTRS)

    Palmer, Everett A.; Hutchins, Edwin L.; Ritter, Richard D.; Vancleemput, Inge

    1993-01-01

    Pilot reports of aviation incidents to the Aviation Safety Reporting System (ASRS) provide a window on the problems occurring in today's airline cockpits. The narratives of 10 pilot reports of errors made in the automation-assisted altitude-change task are used to illustrate some of the issues of pilots interacting with automatic systems. These narratives are then used to construct a description of the cockpit as an information processing system. The analysis concentrates on the error-tolerant properties of the system and on how breakdowns can occasionally occur. An error-tolerant system can detect and correct its internal processing errors. The cockpit system consists of two or three pilots supported by autoflight, flight-management, and alerting systems. These humans and machines have distributed access to clearance information and perform redundant processing of information. Errors can be detected as deviations from either expected behavior or as deviations from expected information. Breakdowns in this system can occur when the checking and cross-checking tasks that give the system its error-tolerant properties are not performed because of distractions or other task demands. Recommendations based on the analysis for improving the error tolerance of the cockpit system are given.

  17. An error analysis perspective for patient alignment systems.

    PubMed

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
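
    To make the error-propagation idea concrete, the following Python sketch combines assumed, independent translational error components in quadrature and shows how a small rotational tracking error is amplified by the lever arm between tracker and probe. All magnitudes are invented for illustration and are not values from the study.

        # Illustrative combination of independent error sources along a
        # registration/tracking chain, plus the lever-arm effect of a small
        # rotational tracking error. Numbers are made up, not from the paper.
        import math

        def combined_error(components_mm):
            """Root-sum-square of independent translational error components."""
            return math.sqrt(sum(c ** 2 for c in components_mm))

        calibration = 0.8      # mm, US calibration error (illustrative)
        registration = 1.1     # mm, image-to-image registration error
        tracking_trans = 0.3   # mm, translational tracking error

        theta = math.radians(0.2)           # rad, rotational tracking error
        lever_arm = 300.0                   # mm, tracker-to-probe-tip distance
        tracking_rot = lever_arm * theta    # small-angle translation at the tip

        total = combined_error([calibration, registration, tracking_trans, tracking_rot])
        print(f"rotational contribution: {tracking_rot:.2f} mm, total: {total:.2f} mm")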

  18. Learning mechanisms to limit medication administration errors.

    PubMed

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  19. Reduced discretization error in HZETRN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Slaba, Tony C., E-mail: Tony.C.Slaba@nasa.gov; Blattnig, Steve R., E-mail: Steve.R.Blattnig@nasa.gov; Tweed, John, E-mail: jtweed@odu.edu

    2013-02-01

    The deterministic particle transport code HZETRN is an efficient analysis tool for studying the effects of space radiation on humans, electronics, and shielding materials. In a previous work, numerical methods in the code were reviewed, and new methods were developed that further improved efficiency and reduced overall discretization error. It was also shown that the remaining discretization error could be attributed to low energy light ions (A < 4) with residual ranges smaller than the physical step-size taken by the code. Accurately resolving the spectrum of low energy light particles is important in assessing risk associated with astronaut radiation exposure. In this work, modifications to the light particle transport formalism are presented that accurately resolve the spectrum of low energy light ion target fragments. The modified formalism is shown to significantly reduce overall discretization error and allows a physical approximation to be removed. For typical step-sizes and energy grids used in HZETRN, discretization errors for the revised light particle transport algorithms are shown to be less than 4% for aluminum and water shielding thicknesses as large as 100 g/cm² exposed to both solar particle event and galactic cosmic ray environments.

  20. National Strategies for Developing Human Resources through Technical and Vocational Education and Training. The 2001 KRIVET International Conference on Technical and Vocational Education and Training [Proceedings] (Seoul, South Korea, November 21-23, 2001).

    ERIC Educational Resources Information Center

    Korea Research Inst. for Vocational Education and Training, Seoul.

    This document contains 19 papers and case studies, in English and Korean, from a conference on national strategies for developing human resources through technical and vocational education and training. The following are representative: "The Need to Innovate and Optimize Resources [Keynote]" (Wataru Iwamoto); "School to Work…

  1. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
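
    A minimal sketch of the estimation step, assuming synthetic data and the scikit-learn Gaussian process implementation rather than the authors' own code, is shown below: noisy per-round error-rate estimates are smoothed and extrapolated to upcoming rounds.

        # Estimate a slowly drifting error rate from noisy per-round estimates
        # with Gaussian process regression (scikit-learn). Data are synthetic.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(0)
        t = np.linspace(0, 100, 60)[:, None]              # time (arbitrary units)
        true_rate = 0.01 + 0.004 * np.sin(t[:, 0] / 20)   # drifting physical error rate
        observed = true_rate + rng.normal(0, 0.001, t.shape[0])  # noisy estimates

        kernel = RBF(length_scale=20.0) + WhiteKernel(noise_level=1e-6)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

        t_future = np.array([[105.0], [110.0]])           # predict upcoming rounds
        mean, std = gp.predict(t_future, return_std=True)
        print(mean, std)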

  2. Error and Error Mitigation in Low-Coverage Genome Assemblies

    PubMed Central

    Hubisz, Melissa J.; Lin, Michael F.; Kellis, Manolis; Siepel, Adam

    2011-01-01

    The recent release of twenty-two new genome sequences has dramatically increased the data available for mammalian comparative genomics, but twenty of these new sequences are currently limited to ∼2× coverage. Here we examine the extent of sequencing error in these 2× assemblies, and its potential impact in downstream analyses. By comparing 2× assemblies with high-quality sequences from the ENCODE regions, we estimate the rate of sequencing error to be 1–4 errors per kilobase. While this error rate is fairly modest, sequencing error can still have surprising effects. For example, an apparent lineage-specific insertion in a coding region is more likely to reflect sequencing error than a true biological event, and the length distribution of coding indels is strongly distorted by error. We find that most errors are contributed by a small fraction of bases with low quality scores, in particular, by the ends of reads in regions of single-read coverage in the assembly. We explore several approaches for automatic sequencing error mitigation (SEM), making use of the localized nature of sequencing error, the fact that it is well predicted by quality scores, and information about errors that comes from comparisons across species. Our automatic methods for error mitigation cannot replace the need for additional sequencing, but they do allow substantial fractions of errors to be masked or eliminated at the cost of modest amounts of over-correction, and they can reduce the impact of error in downstream phylogenomic analyses. Our error-mitigated alignments are available for download. PMID:21340033
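
    One of the mitigation ideas described above, masking bases whose quality scores fall below a threshold, can be sketched in a few lines of Python. This toy example assumes per-base Phred scores are available and is not the authors' actual pipeline.

        # Toy error-mitigation sketch: mask bases whose Phred quality falls below
        # a threshold, since most sequencing errors come from a small fraction of
        # low-quality bases. Not the authors' pipeline.

        def mask_low_quality(seq, quals, min_q=20, mask_char="N"):
            """seq: base string; quals: per-base Phred scores; returns masked string."""
            return "".join(
                base if q >= min_q else mask_char
                for base, q in zip(seq, quals)
            )

        seq = "ACGTACGTAC"
        quals = [38, 40, 12, 35, 8, 30, 33, 15, 40, 37]
        print(mask_low_quality(seq, quals))   # ACNTNCGNAC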

  3. Latent error detection: A golden two hours for detection.

    PubMed

    Saward, Justin R E; Stanton, Neville A

    2017-03-01

    Undetected error in safety critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion is observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, for which the deliberate review of past tasks within two hours of the error occurring and whilst remaining in the same or similar sociotechnical environment to that which the error occurred appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors; particularly in simple everyday habitual tasks. It is thought safety critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  4. Blood transfusion sampling and a greater role for error recovery.

    PubMed

    Oldham, Jane

    Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.

  5. [Improving blood safety: errors management in transfusion medicine].

    PubMed

    Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana

    2014-01-01

    The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents involving the blood donor or patient. Monitoring of near-miss errors reveals the critical points in the working process and increases transfusion safety. The aim of the study was to present the results of an analysis of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. This one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors at the Blood Transfusion Institute of Vojvodina. Errors were classified according to type, frequency and the part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with actual health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain, and most were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. An error reporting system has an important role in error management and in reducing the transfusion-related risk of adverse events and incidents. Ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. Errors in transfusion medicine can largely be avoided, and prevention is cost-effective, systematic and applicable.

  6. Reducing diagnostic errors in medicine: what's the goal?

    PubMed

    Graber, Mark; Gordon, Ruthanna; Franklin, Nancy

    2002-10-01

    This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.

  7. Unforced errors and error reduction in tennis

    PubMed Central

    Brody, H

    2006-01-01

    Only at the highest level of tennis is the number of winners comparable to the number of unforced errors. As the average player loses many more points due to unforced errors than due to winners by an opponent, if the rate of unforced errors can be reduced, it should lead to an increase in points won. This article shows how players can improve their game by understanding and applying the laws of physics to reduce the number of unforced errors. PMID:16632568

  8. "First, know thyself": cognition and error in medicine.

    PubMed

    Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo

    2016-04-01

    Although error is an integral part of the world of medicine, physicians have always been little inclined to take into account their own mistakes, and the extraordinary technological progress observed in the last decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment of human and economic resources, has paved the way for new strategies made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical reasoning, which tends to be largely automatic and fast-reactive, and a slow or analytical reasoning, which permits rationally founded answers. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the heuristic triggered fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.

  9. What Happened, and Why: Toward an Understanding of Human Error Based on Automated Analyses of Incident Reports. Volume 1

    NASA Technical Reports Server (NTRS)

    Maille, Nicolas P.; Statler, Irving C.; Ferryman, Thomas A.; Rosenthal, Loren; Shafto, Michael G.

    2006-01-01

    The objective of the Aviation System Monitoring and Modeling (ASMM) project of NASA's Aviation Safety and Security Program was to develop technologies that will enable proactive management of safety risk, which entails identifying the precursor events and conditions that foreshadow most accidents. This presents a particular challenge in the aviation system where people are key components and human error is frequently cited as a major contributing factor or cause of incidents and accidents. In the aviation "world", information about what happened can be extracted from quantitative data sources, but the experiential account of the incident reporter is the best available source of information about why an incident happened. This report describes a conceptual model and an approach to automated analyses of textual data sources for the subjective perspective of the reporter of the incident to aid in understanding why an incident occurred. It explores a first-generation process for routinely searching large databases of textual reports of aviation incidents or accidents, and reliably analyzing them for causal factors of human behavior (the why of an incident). We have defined a generic structure of information that is postulated to be a sound basis for defining similarities between aviation incidents. Based on this structure, we have introduced the simplifying structure, which we call the Scenario, as a pragmatic guide for identifying similarities of what happened based on the objective parameters that define the Context and the Outcome of a Scenario. We believe that it will be possible to design an automated analysis process guided by the structure of the Scenario that will aid aviation-safety experts to understand the systemic issues that are conducive to human error.

  10. [Technical guideline for human rabies prevention and control (2016)].

    PubMed

    Zhou, H; Li, Y; Chen, R F; Tao, X Y; Yu, P C; Cao, S C; Li, L; Chen, Z H; Zhu, W Y; Yin, W W; Li, Y H; Wang, C L; Yu, H J

    2016-02-01

    In order to promote rabies prevention and control programs in our country, to regulate the prevention and disposition of rabies, and to reduce deaths caused by rabies, the Chinese Center for Disease Control and Prevention organized a panel of experts who, with reference to guidelines issued by the WHO and the American Advisory Committee on Immunization Practices and to the latest research progress at home and abroad, compiled this document, the "Technical Guidelines for Human Rabies Prevention and Control (2016)". The guidelines provide a systematic review of the etiology, clinical characteristics, laboratory diagnosis and epidemiology of rabies, and present evidence on the varieties, mechanisms, effects, side effects and safety of rabies vaccines and other passive immunization preparations, as well as on methods for the prevention and disposition of exposure, and conclude with recommendations on these techniques. The guidelines are intended for staff working on rabies prevention and control at Centers for Disease Control and Prevention at all levels, and for outpatient departments and divisions of infection control and emergency care in medical institutions. The guidelines will be updated and revised as research progresses at home and abroad.

  11. Intermittently-visual Tracking Experiments Reveal the Roles of Error-correction and Predictive Mechanisms in the Human Visual-motor Control System

    NASA Astrophysics Data System (ADS)

    Hayashi, Yoshikatsu; Tamura, Yurie; Sase, Kazuya; Sugawara, Ken; Sawada, Yasuji

    A prediction mechanism is necessary in human visuo-motor control to compensate for the delay of the sensory-motor system. In a previous study, "proactive control" was discussed as one example of the predictive function of human beings, in which hand motion preceded the virtual moving target in visual tracking experiments. To study the roles of the positional-error correction mechanism and the prediction mechanism, we carried out an intermittently-visual tracking experiment in which a circular orbit is segmented into target-visible regions and target-invisible regions. The main results of this research were as follows. A rhythmic component appeared in the tracer velocity when the target velocity was relatively high. The period of the rhythm in the brain obtained from environmental stimuli is shortened by more than 10%. This shortening of the period of the rhythm in the brain accelerates the hand motion as soon as the visual information is cut off, and causes the hand motion to precede the target motion. Although the precedence of the hand in the blind region is reset by environmental information when the target enters the visible region, the hand motion precedes the target on average when the predictive mechanism dominates the error-corrective mechanism.

  12. An Analysis of Oral Reading Errors and Its Implications for Improvement of Reading Instruction. Technical Report No. 50.

    ERIC Educational Resources Information Center

    Au, Kathryn H.

    The oral reading errors of 15 second graders were analyzed to find out if strategies used by good and poor readers could be differentiated. Patterns of errors were identified, and it was found that good readers often used context cues, while poor readers relied heavily on visual-phonic information. It was also possible to identify good and poor…

  13. Mental representation of symbols as revealed by vocabulary errors in two bonobos (Pan paniscus).

    PubMed

    Lyn, Heidi

    2007-10-01

    Error analysis has been used in humans to detect implicit representations and categories in language use. The present study utilizes the same technique to report on mental representations and categories in symbol use by two bonobos (Pan paniscus). These bonobos have been shown in published reports to comprehend English at the level of a two-and-a-half-year-old child and to use a keyboard with over 200 visuographic symbols (lexigrams). In this study, vocabulary test errors from over 10 years of data revealed auditory, visual, and spatio-temporal generalizations (errors were more likely to be items that looked like, sounded like, or were frequently associated with the sample item in space or in time), as well as hierarchical and conceptual categorizations. These error data, like those of humans, are a result of spontaneous responding rather than specific training and do not depend solely upon the sample mode (e.g. auditory similarity errors were not universally more frequent with an English sample, nor were visual similarity errors universally more frequent with a photograph sample). However, unlike humans, these bonobos did not make errors based on syntactical confusions (e.g. confusing semantically unrelated nouns), suggesting that they may not separate syntactical and semantic information. These data suggest that apes spontaneously create a complex, hierarchical web of representations when exposed to a symbol system.

  14. The spontaneous replication error and the mismatch discrimination mechanisms of human DNA polymerase β

    PubMed Central

    Koag, Myong-Chul; Nam, Kwangho; Lee, Seongmin

    2014-01-01

    To provide molecular-level insights into the spontaneous replication error and the mismatch discrimination mechanisms of human DNA polymerase β (polβ), we report four crystal structures of polβ complexed with dG•dTTP and dA•dCTP mismatches in the presence of Mg2+ or Mn2+. The Mg2+-bound ground-state structures show that the dA•dCTP-Mg2+ complex adopts an ‘intermediate’ protein conformation while the dG•dTTP-Mg2+ complex adopts an open protein conformation. The Mn2+-bound ‘pre-chemistry-state’ structures show that the dA•dCTP-Mn2+ complex is structurally very similar to the dA•dCTP-Mg2+ complex, whereas the dG•dTTP-Mn2+ complex undergoes a large-scale conformational change to adopt a Watson–Crick-like dG•dTTP base pair and a closed protein conformation. These structural differences, together with our molecular dynamics simulation studies, suggest that polβ increases replication fidelity via a two-stage mismatch discrimination mechanism, where one is in the ground state and the other in the closed conformation state. In the closed conformation state, polβ appears to allow only a Watson–Crick-like conformation for purine•pyrimidine base pairs, thereby discriminating the mismatched base pairs based on their ability to form the Watson–Crick-like conformation. Overall, the present studies provide new insights into the spontaneous replication error and the replication fidelity mechanisms of polβ. PMID:25200079

  15. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed

  16. Implementation errors in the GingerALE Software: Description and recommendations.

    PubMed

    Eickhoff, Simon B; Laird, Angela R; Fox, P Mickle; Lancaster, Jack L; Fox, Peter T

    2017-01-01

    Neuroscience imaging is a burgeoning, highly sophisticated field the growth of which has been fostered by grant-funded, freely distributed software libraries that perform voxel-wise analyses in anatomically standardized three-dimensional space on multi-subject, whole-brain, primary datasets. Despite the ongoing advances made using these non-commercial computational tools, the replicability of individual studies is an acknowledged limitation. Coordinate-based meta-analysis offers a practical solution to this limitation and, consequently, plays an important role in filtering and consolidating the enormous corpus of functional and structural neuroimaging results reported in the peer-reviewed literature. In both primary data and meta-analytic neuroimaging analyses, correction for multiple comparisons is a complex but critical step for ensuring statistical rigor. Reports of errors in multiple-comparison corrections in primary-data analyses have recently appeared. Here, we report two such errors in GingerALE, a widely used, US National Institutes of Health (NIH)-funded, freely distributed software package for coordinate-based meta-analysis. These errors have given rise to published reports with more liberal statistical inferences than were specified by the authors. The intent of this technical report is threefold. First, we inform authors who used GingerALE of these errors so that they can take appropriate actions including re-analyses and corrective publications. Second, we seek to exemplify and promote an open approach to error management. Third, we discuss the implications of these and similar errors in a scientific environment dependent on third-party software. Hum Brain Mapp 38:7-11, 2017. © 2016 Wiley Periodicals, Inc.
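
    As a generic illustration of the kind of multiple-comparison thresholding step where an implementation error can silently yield more liberal inferences, the Python sketch below applies the standard Benjamini-Hochberg false discovery rate procedure to a list of p-values. This is not GingerALE's code, and the p-values are invented.

        # Generic Benjamini-Hochberg FDR thresholding; shown only to illustrate
        # the kind of multiple-comparison step whose mis-implementation can make
        # inference more liberal than intended. Not GingerALE's code.
        import numpy as np

        def bh_threshold(p_values, q=0.05):
            """Return the largest p-value cutoff satisfying the BH criterion, or 0."""
            p = np.sort(np.asarray(p_values))
            m = p.size
            crit = q * np.arange(1, m + 1) / m       # BH critical values i*q/m
            passing = np.nonzero(p <= crit)[0]
            return p[passing[-1]] if passing.size else 0.0

        pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.51, 0.74]
        cutoff = bh_threshold(pvals, q=0.05)
        print(cutoff)   # p-values at or below this cutoff are declared significant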

  17. Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2013-10-02

    Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.

  18. Neural markers of errors as endophenotypes in neuropsychiatric disorders

    PubMed Central

    Manoach, Dara S.; Agam, Yigal

    2013-01-01

    Learning from errors is fundamental to adaptive human behavior. It requires detecting errors, evaluating what went wrong, and adjusting behavior accordingly. These dynamic adjustments are at the heart of behavioral flexibility and accumulating evidence suggests that deficient error processing contributes to maladaptively rigid and repetitive behavior in a range of neuropsychiatric disorders. Neuroimaging and electrophysiological studies reveal highly reliable neural markers of error processing. In this review, we evaluate the evidence that abnormalities in these neural markers can serve as sensitive endophenotypes of neuropsychiatric disorders. We describe the behavioral and neural hallmarks of error processing, their mediation by common genetic polymorphisms, and impairments in schizophrenia, obsessive-compulsive disorder, and autism spectrum disorders. We conclude that neural markers of errors meet several important criteria as endophenotypes including heritability, established neuroanatomical and neurochemical substrates, association with neuropsychiatric disorders, presence in syndromally-unaffected family members, and evidence of genetic mediation. Understanding the mechanisms of error processing deficits in neuropsychiatric disorders may provide novel neural and behavioral targets for treatment and sensitive surrogate markers of treatment response. Treating error processing deficits may improve functional outcome since error signals provide crucial information for flexible adaptation to changing environments. Given the dearth of effective interventions for cognitive deficits in neuropsychiatric disorders, this represents a potentially promising approach. PMID:23882201

  19. An error taxonomy system for analysis of haemodialysis incidents.

    PubMed

    Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi

    2014-12-01

    This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting an error taxonomy system which assumed no specific specialty to haemodialysis situations. Its application was conducted with 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to dialyser, circuit, medication and setting of dialysis condition. Approximately 70% of errors took place immediately before and after the four hours of haemodialysis therapy. Error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified to staff human factors, communication, task and organisational factors were found in most dialysis incidents. Device/equipment/materials, medicine and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicine and documents, whereas dialysis technologists made more errors with device/equipment/materials. This error taxonomy system is able to investigate incidents and adverse events occurring in the dialysis setting but is also able to estimate safety-related status of an organisation, such as reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  20. Neural markers of errors as endophenotypes in neuropsychiatric disorders.

    PubMed

    Manoach, Dara S; Agam, Yigal

    2013-01-01

    Learning from errors is fundamental to adaptive human behavior. It requires detecting errors, evaluating what went wrong, and adjusting behavior accordingly. These dynamic adjustments are at the heart of behavioral flexibility and accumulating evidence suggests that deficient error processing contributes to maladaptively rigid and repetitive behavior in a range of neuropsychiatric disorders. Neuroimaging and electrophysiological studies reveal highly reliable neural markers of error processing. In this review, we evaluate the evidence that abnormalities in these neural markers can serve as sensitive endophenotypes of neuropsychiatric disorders. We describe the behavioral and neural hallmarks of error processing, their mediation by common genetic polymorphisms, and impairments in schizophrenia, obsessive-compulsive disorder, and autism spectrum disorders. We conclude that neural markers of errors meet several important criteria as endophenotypes including heritability, established neuroanatomical and neurochemical substrates, association with neuropsychiatric disorders, presence in syndromally-unaffected family members, and evidence of genetic mediation. Understanding the mechanisms of error processing deficits in neuropsychiatric disorders may provide novel neural and behavioral targets for treatment and sensitive surrogate markers of treatment response. Treating error processing deficits may improve functional outcome since error signals provide crucial information for flexible adaptation to changing environments. Given the dearth of effective interventions for cognitive deficits in neuropsychiatric disorders, this represents a potentially promising approach.

  1. Analysis of the "naming game" with learning errors in communications.

    PubMed

    Lou, Yang; Chen, Guanrong

    2015-07-16

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.

  2. technical Hexachlorocyclohexane (t-HCH)

    Integrated Risk Information System (IRIS)

    technical Hexachlorocyclohexane ( t - HCH ) ; CASRN 608 - 73 - 1 Human health assessment information on a chemical substance is included in the IRIS database only after a comprehensive review of toxicity data , as outlined in the IRIS assessment development process . Sections I ( Health Hazard Asses

  3. Prevention of medication errors: detection and audit.

    PubMed

    Montesi, Germana; Lechi, Alessandro

    2009-06-01

    1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice errors, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to be different in research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, using direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various and encourages the diffusion of a culture of safe practice sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performances of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.

  4. [From the concept of guilt to the value-free notification of errors in medicine. Risks, errors and patient safety].

    PubMed

    Haller, U; Welti, S; Haenggi, D; Fink, D

    2005-06-01

    The number of liability cases but also the size of individual claims due to alleged treatment errors are increasing steadily. Spectacular sentences, especially in the USA, encourage this trend. Wherever human beings work, errors happen. The health care system is particularly susceptible and shows a high potential for errors. Therefore risk management has to be given top priority in hospitals. Preparing the introduction of critical incident reporting (CIR) as the means to notify errors is time-consuming and calls for a change in attitude because in many places the necessary base of trust has to be created first. CIR is not made to find the guilty and punish them but to uncover the origins of errors in order to eliminate them. The Department of Anesthesiology of the University Hospital of Basel has developed an electronic error notification system, which, in collaboration with the Swiss Medical Association, allows each specialist society to participate electronically in a CIR system (CIRS) in order to create the largest database possible and thereby to allow statements concerning the extent and type of error sources in medicine. After a pilot project in 2000-2004, the Swiss Society of Gynecology and Obstetrics is now progressively introducing the 'CIRS Medical' of the Swiss Medical Association. In our country, such programs are vulnerable to judicial intervention due to the lack of explicit legal guarantees of protection. High-quality data registration and skillful counseling are all the more important. Hospital directors and managers are called upon to examine those incidents which are based on errors inherent in the system.

  5. Obstetric Neuraxial Drug Administration Errors: A Quantitative and Qualitative Analytical Review.

    PubMed

    Patel, Santosh; Loveridge, Robert

    2015-12-01

    Drug administration errors in obstetric neuraxial anesthesia can have devastating consequences. Although fully recognizing that they represent "only the tip of the iceberg," published case reports/series of these errors were reviewed in detail with the aim of estimating the frequency and the nature of these errors. We identified case reports and case series from MEDLINE and performed a quantitative analysis of the involved drugs, error setting, source of error, the observed complications, and any therapeutic interventions. We subsequently performed a qualitative analysis of the human factors involved and proposed modifications to practice. Twenty-nine cases were identified. Various drugs were given in error, but no direct effects on the course of labor, mode of delivery, or neonatal outcome were reported. Four maternal deaths from the accidental intrathecal administration of tranexamic acid were reported, all occurring after delivery of the fetus. A range of hemodynamic and neurologic signs and symptoms were noted, but the most commonly reported complication was the failure of the intended neuraxial anesthetic technique. Several human factors were present; most common factors were drug storage issues and similar drug appearance. Four practice recommendations were identified as being likely to have prevented the errors. The reported errors exposed latent conditions within health care systems. We suggest that the implementation of the following processes may decrease the risk of these types of drug errors: (1) Careful reading of the label on any drug ampule or syringe before the drug is drawn up or injected; (2) labeling all syringes; (3) checking labels with a second person or a device (such as a barcode reader linked to a computer) before the drug is drawn up or administered; and (4) use of non-Luer lock connectors on all epidural/spinal/combined spinal-epidural devices. Further study is required to determine whether routine use of these processes will reduce drug

  6. A method and technical equipment for an acute human trial to evaluate retinal implant technology

    NASA Astrophysics Data System (ADS)

    Hornig, Ralf; Laube, Thomas; Walter, Peter; Velikay-Parel, Michaela; Bornfeld, Norbert; Feucht, Matthias; Akguel, Harun; Rössler, Gernot; Alteheld, Nils; Lütke Notarp, Dietmar; Wyatt, John; Richard, Gisbert

    2005-03-01

    This paper reports on methods and technical equipment to investigate the epiretinal stimulation of the retina in blind human subjects in acute trials. Current is applied to the retina through a thin, flexible microcontact film (microelectrode array) with electrode diameters ranging from 50 to 360 µm. The film is mounted in a custom-designed surgical tool that is hand-held by the surgeon during stimulation. The eventual goal of the work is the development of a chronically implantable retinal prosthesis to restore a useful level of vision to patients who are blind with outer retinal degenerations, specifically retinitis pigmentosa and macular degeneration.

  7. Research in Automatic Russian-English Scientific and Technical Lexicography. Final Report.

    ERIC Educational Resources Information Center

    Wayne State Univ., Detroit, MI.

    Techniques of reversing English-Russian scientific and technical dictionaries into Russian-English versions through semi-automated compilation are described. Sections on manual and automatic processing discuss pre- and post-editing, the task program, updater (correction of errors and revision by specialist in a given field), the system employed…

  8. Human error and crew resource management failures in Naval aviation mishaps: a review of U.S. Naval Safety Center data, 1990-96.

    PubMed

    Wiegmann, D A; Shappell, S A

    1999-12-01

    The present study examined the role of human error and crew-resource management (CRM) failures in U.S. Naval aviation mishaps. All tactical jet (TACAIR) and rotary wing Class A flight mishaps between fiscal years 1990-1996 were reviewed. Results indicated that over 75% of both TACAIR and rotary wing mishaps were attributable, at least in part, to some form of human error of which 70% were associated with aircrew human factors. Of these aircrew-related mishaps, approximately 56% involved at least one CRM failure. These percentages are very similar to those observed prior to the implementation of aircrew coordination training (ACT) in the fleet, suggesting that the initial benefits of the program have not persisted and that CRM failures continue to plague Naval aviation. Closer examination of these CRM-related mishaps suggest that the type of flight operations (preflight, routine, emergency) do play a role in the etiology of CRM failures. A larger percentage of CRM failures occurred during non-routine or extremis flight situations when TACAIR mishaps were considered. In contrast, a larger percentage of rotary wing CRM mishaps involved failures that occurred during routine flight operations. These findings illustrate the complex etiology of CRM failures within Naval aviation and support the need for ACT programs tailored to the unique problems faced by specific communities in the fleet.

  9. Denoising DNA deep sequencing data—high-throughput sequencing errors and their correction

    PubMed Central

    Laehnemann, David; Borkhardt, Arndt

    2016-01-01

    Characterizing the errors generated by common high-throughput sequencing platforms and telling true genetic variation from technical artefacts are two interdependent steps, essential to many analyses such as single nucleotide variant calling, haplotype inference, sequence assembly and evolutionary studies. Both random and systematic errors can show a specific occurrence profile for each of the six prominent sequencing platforms surveyed here: 454 pyrosequencing, Complete Genomics DNA nanoball sequencing, Illumina sequencing by synthesis, Ion Torrent semiconductor sequencing, Pacific Biosciences single-molecule real-time sequencing and Oxford Nanopore sequencing. There is a large variety of programs available for error removal in sequencing read data, which differ in the error models and statistical techniques they use, the features of the data they analyse, the parameters they determine from them and the data structures and algorithms they use. We highlight the assumptions they make and for which data types these hold, providing guidance which tools to consider for benchmarking with regard to the data properties. While no benchmarking results are included here, such specific benchmarks would greatly inform tool choices and future software development. The development of stand-alone error correctors, as well as single nucleotide variant and haplotype callers, could also benefit from using more of the knowledge about error profiles and from (re)combining ideas from the existing approaches presented here. PMID:26026159

  10. Encoder fault analysis system based on Moire fringe error signal

    NASA Astrophysics Data System (ADS)

    Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu

    2018-02-01

    Aiming at the problem of any fault and wrong code in the practical application of photoelectric shaft encoder, a fast and accurate encoder fault analysis system is researched from the aspect of Moire fringe photoelectric signal processing. DSP28335 is selected as the core processor and high speed serial A/D converter acquisition card is used. And temperature measuring circuit using AD7420 is designed. Discrete data of Moire fringe error signal is collected at different temperatures and it is sent to the host computer through wireless transmission. The error signal quality index and fault type is displayed on the host computer based on the error signal identification method. The error signal quality can be used to diagnosis the state of error code through the human-machine interface.

  11. Transfer Error and Correction Approach in Mobile Network

    NASA Astrophysics Data System (ADS)

    Xiao-kai, Wu; Yong-jin, Shi; Da-jin, Chen; Bing-he, Ma; Qi-li, Zhou

    With the development of information technology and social progress, human demand for information has become increasingly diverse, wherever and whenever people want to be able to easily, quickly and flexibly via voice, data, images and video and other means to communicate. Visual information to the people direct and vivid image, image / video transmission also been widespread attention. Although the third generation mobile communication systems and the emergence and rapid development of IP networks, making video communications is becoming the main business of the wireless communications, however, the actual wireless and IP channel will lead to error generation, such as: wireless channel multi- fading channels generated error and blocking IP packet loss and so on. Due to channel bandwidth limitations, the video communication compression coding of data is often beyond the data, and compress data after the error is very sensitive to error conditions caused a serious decline in image quality.

  12. Determining the Numeracy and Algebra Errors of Students in a Two-Year Vocational School

    ERIC Educational Resources Information Center

    Akyüz, Gözde

    2015-01-01

    The goal of this study was to determine the mathematics achievement level in basic numeracy and algebra concepts of students in a two-year program in a technical vocational school of higher education and determine the errors that they make in these topics. The researcher developed a diagnostic mathematics achievement test related to numeracy and…

  13. Skills, rules and knowledge in aircraft maintenance: errors in context

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2002-01-01

    Automatic or skill-based behaviour is generally considered to be less prone to error than behaviour directed by conscious control. However, researchers who have applied Rasmussen's skill-rule-knowledge human error framework to accidents and incidents have sometimes found that skill-based errors appear in significant numbers. It is proposed that this is largely a reflection of the opportunities for error which workplaces present and does not indicate that skill-based behaviour is intrinsically unreliable. In the current study, 99 errors reported by 72 aircraft mechanics were examined in the light of a task analysis based on observations of the work of 25 aircraft mechanics. The task analysis identified the opportunities for error presented at various stages of maintenance work packages and by the job as a whole. Once the frequency of each error type was normalized in terms of the opportunities for error, it became apparent that skill-based performance is more reliable than rule-based performance, which is in turn more reliable than knowledge-based performance. The results reinforce the belief that industrial safety interventions designed to reduce errors would best be directed at those aspects of jobs that involve rule- and knowledge-based performance.

  14. Covariate Measurement Error Correction Methods in Mediation Analysis with Failure Time Data

    PubMed Central

    Zhao, Shanshan

    2014-01-01

    Summary Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This paper focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error and error associated with temporal variation. The underlying model with the ‘true’ mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling design. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. PMID:25139469

  15. Covariate measurement error correction methods in mediation analysis with failure time data.

    PubMed

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  16. Challenge and Error: Critical Events and Attention-Related Errors

    ERIC Educational Resources Information Center

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error [image omitted] attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  17. Understanding diagnostic errors in medicine: a lesson from aviation

    PubMed Central

    Singh, H; Petersen, L A; Thomas, E J

    2006-01-01

    The impact of diagnostic errors on patient safety in medicine is increasingly being recognized. Despite the current progress in patient safety research, the understanding of such errors and how to prevent them is inadequate. Preliminary research suggests that diagnostic errors have both cognitive and systems origins. Situational awareness is a model that is primarily used in aviation human factors research that can encompass both the cognitive and the systems roots of such errors. This conceptual model offers a unique perspective in the study of diagnostic errors. The applicability of this model is illustrated by the analysis of a patient whose diagnosis of spinal cord compression was substantially delayed. We suggest how the application of this framework could lead to potential areas of intervention and outline some areas of future research. It is possible that the use of such a model in medicine could help reduce errors in diagnosis and lead to significant improvements in patient care. Further research is needed, including the measurement of situational awareness and correlation with health outcomes. PMID:16751463

  18. Emmetropisation and the aetiology of refractive errors

    PubMed Central

    Flitcroft, D I

    2014-01-01

    The distribution of human refractive errors displays features that are not commonly seen in other biological variables. Compared with the more typical Gaussian distribution, adult refraction within a population typically has a negative skew and increased kurtosis (ie is leptokurtotic). This distribution arises from two apparently conflicting tendencies, first, the existence of a mechanism to control eye growth during infancy so as to bring refraction towards emmetropia/low hyperopia (ie emmetropisation) and second, the tendency of many human populations to develop myopia during later childhood and into adulthood. The distribution of refraction therefore changes significantly with age. Analysis of the processes involved in shaping refractive development allows for the creation of a life course model of refractive development. Monte Carlo simulations based on such a model can recreate the variation of refractive distributions seen from birth to adulthood and the impact of increasing myopia prevalence on refractive error distributions in Asia. PMID:24406411

  19. A Human Factors Analysis of Technical and Team Skills Among Surgical Trainees During Procedural Simulations in a Simulated Operating Theatre

    PubMed Central

    Moorthy, Krishna; Munz, Yaron; Adams, Sally; Pandey, Vikas; Darzi, Ara

    2005-01-01

    Background: High-risk organizations such as aviation rely on simulations for the training and assessment of technical and team performance. The aim of this study was to develop a simulated environment for surgical trainees using similar principles. Methods: A total of 27 surgical trainees carried out a simulated procedure in a Simulated Operating Theatre with a standardized OR team. Observation of OR events was carried out by an unobtrusive data collection system: clinical data recorder. Assessment of performance consisted of blinded rating of technical skills, a checklist of technical events, an assessment of communication, and a global rating of team skills by a human factors expert and trained surgical research fellows. The participants underwent a debriefing session, and the face validity of the simulated environment was evaluated. Results: While technical skills rating discriminated between surgeons according to experience (P = 0.002), there were no differences in terms of the checklist and team skills (P = 0.70). While all trainees were observed to gown/glove and handle sharps correctly, low scores were observed for some key features of communication with other team members. Low scores were obtained by the entire cohort for vigilance. Interobserver reliability was 0.90 and 0.89 for technical and team skills ratings. Conclusions: The simulated operating theatre could serve as an environment for the development of surgical competence among surgical trainees. Objective, structured, and multimodal assessment of performance during simulated procedures could serve as a basis for focused feedback during training of technical and team skills. PMID:16244534

  20. Norm-Aware Socio-Technical Systems

    NASA Astrophysics Data System (ADS)

    Savarimuthu, Bastin Tony Roy; Ghose, Aditya

    The following sections are included: * Introduction * The Need for Norm-Aware Systems * Norms in human societies * Why should software systems be norm-aware? * Case Studies of Norm-Aware Socio-Technical Systems * Human-computer interactions * Virtual environments and multi-player online games * Extracting norms from big data and software repositories * Norms and Sustainability * Sustainability and green ICT * Norm awareness through software systems * Where To, From Here? * Conclusions

  1. The Chinese Visible Human (CVH) datasets incorporate technical and imaging advances on earlier digital humans

    PubMed Central

    Zhang, Shao-Xiang; Heng, Pheng-Ann; Liu, Zheng-Jin; Tan, Li-Wen; Qiu, Ming-Guo; Li, Qi-Yu; Liao, Rong-Xia; Li, Kai; Cui, Gao-Yu; Guo, Yan-Li; Yang, Xiao-Ping; Liu, Guang-Jiu; Shan, Jing-Lu; Liu, Ji-Jun; Zhang, Wei-Guo; Chen, Xian-Hong; Chen, Jin-Hua; Wang, Jian; Chen, Wei; Lu, Ming; You, Jian; Pang, Xue-Li; Xiao, Hong; Xie, Yong-Ming; Cheng, Jack Chun-Yiu

    2004-01-01

    We report the availability of a digitized Chinese male and a digitzed Chinese female typical of the population and with no obvious abnormalities. The embalming and milling procedures incorporate three technical improvements over earlier digitized cadavers. Vascular perfusion with coloured gelatin was performed to facilitate blood vessel identification. Embalmed cadavers were embedded in gelatin and cryosectioned whole so as to avoid section loss resulting from cutting the body into smaller pieces. Milling performed at −25 °C prevented small structures (e.g. teeth, concha nasalis and articular cartilage) from falling off from the milling surface. The male image set (.tiff images each of 36 Mb) has a section resolution of 3072 × 2048 pixels (∼170 μm, the accompanying magnetic resonance imaging and computer tomography data have a resolution of 512 × 512, i.e. ∼440 μm). The Chinese Visible Human male and female datasets are available at http://www.chinesevisiblehuman.com. (The male is 90.65 Gb and female 131.04 Gb). MPEG videos of direct records of real-time volume rendering are at: http://www.cse.cuhk.edu.hk/~crc PMID:15032906

  2. Nature and Nurture: the complex genetics of myopia and refractive error

    PubMed Central

    Wojciechowski, Robert

    2010-01-01

    The refractive errors, myopia and hyperopia, are optical defects of the visual system that can cause blurred vision. Uncorrected refractive errors are the most common causes of visual impairment worldwide. It is estimated that 2.5 billion people will be affected by myopia alone with in the next decade. Experimental, epidemiological and clinical research has shown that refractive development is influenced by both environmental and genetic factors. Animal models have demonstrated that eye growth and refractive maturation during infancy are tightly regulated by visually-guided mechanisms. Observational data in human populations provide compelling evidence that environmental influences and individual behavioral factors play crucial roles in myopia susceptibility. Nevertheless, the majority of the variance of refractive error within populations is thought to be due to hereditary factors. Genetic linkage studies have mapped two dozen loci, while association studies have implicated more than 25 different genes in refractive variation. Many of these genes are involved in common biological pathways known to mediate extracellular matrix composition and regulate connective tissue remodeling. Other associated genomic regions suggest novel mechanisms in the etiology of human myopia, such as mitochondrial-mediated cell death or photoreceptor-mediated visual signal transmission. Taken together, observational and experimental studies have revealed the complex nature of human refractive variation, which likely involves variants in several genes and functional pathways. Multiway interactions between genes and/or environmental factors may also be important in determining individual risks of myopia, and may help explain the complex pattern of refractive error in human populations. PMID:21155761

  3. A Case Study on the Status of Technical and Vocational Education in Thailand. Case Studies on Technical and Vocational Education in Asia and the Pacific.

    ERIC Educational Resources Information Center

    Shoolap, Charoon; Choomnoom, Siripan

    This two-part report presents an overview of the vocational and technical education system in Thailand. The first part is a review of the current and likely future situations pertaining to vocational and technical education in that country. An analysis of economic conditions, human resource development, and the existing technical and vocational…

  4. Technical writing versus technical writing

    NASA Technical Reports Server (NTRS)

    Dillingham, J. W.

    1981-01-01

    Two terms, two job categories, 'technical writer' and 'technical author' are discussed in terms of industrial and business requirements and standards. A distinction between 'technical writing' and technical 'writing' is made. The term 'technical editor' is also considered. Problems inherent in the design of programs to prepare and train students for these jobs are discussed. A closer alliance between industry and academia is suggested as a means of preparing students with competent technical communication skills (especially writing and editing skills) and good technical skills.

  5. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    PubMed

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  6. Technical Education--The Key to Sustainable Technological Development

    ERIC Educational Resources Information Center

    Odo, J. U.; Okafor, W. C.; Odo, A. L.; Ejikeugwu, L. N.; Ugwuoke, C. N.

    2017-01-01

    Technical education has been identified as one of the most effective human resource development that needs to be embraced for rapid industrialization and sustainable technological development of any nation. Technical education has been an integral part of national development in many societies because of its impact on productivity and economic…

  7. Aged-related Neural Changes during Memory Conjunction Errors

    PubMed Central

    Giovanello, Kelly S.; Kensinger, Elizabeth A.; Wong, Alana T.; Schacter, Daniel L.

    2013-01-01

    Human behavioral studies demonstrate that healthy aging is often accompanied by increases in memory distortions or errors. Here we used event-related functional MRI to examine the neural basis of age-related memory distortions. We utilized the memory conjunction error paradigm, a laboratory procedure known to elicit high levels of memory errors. For older adults, right parahippocampal gyrus showed significantly greater activity during false than during accurate retrieval. We observed no regions in which activity was greater during false than during accurate retrieval for young adults. Young adults, however, showed significantly greater activity than old adults during accurate retrieval in right hippocampus. By contrast, older adults demonstrated greater activity than young adults during accurate retrieval in right inferior and middle prefrontal cortex. These data are consistent with the notion that age-related memory conjunction errors arise from dysfunction of hippocampal system mechanisms, rather than impairments in frontally-mediated monitoring processes. PMID:19445606

  8. National Aeronautics and Space Administration "threat and error" model applied to pediatric cardiac surgery: error cycles precede ∼85% of patient deaths.

    PubMed

    Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S

    2015-02-01

    We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2) 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280; 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, ie, the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.

  9. Ergonomic guidelines for using notebook personal computers. Technical Committee on Human-Computer Interaction, International Ergonomics Association.

    PubMed

    Saito, S; Piccoli, B; Smith, M J; Sotoyama, M; Sweitzer, G; Villanueva, M B; Yoshitake, R

    2000-10-01

    In the 1980's, the visual display terminal (VDT) was introduced in workplaces of many countries. Soon thereafter, an upsurge in reported cases of related health problems, such as musculoskeletal disorders and eyestrain, was seen. Recently, the flat panel display or notebook personal computer (PC) became the most remarkable feature in modern workplaces with VDTs and even in homes. A proactive approach must be taken to avert foreseeable ergonomic and occupational health problems from the use of this new technology. Because of its distinct physical and optical characteristics, the ergonomic requirements for notebook PCs in terms of machine layout, workstation design, lighting conditions, among others, should be different from the CRT-based computers. The Japan Ergonomics Society (JES) technical committee came up with a set of guidelines for notebook PC use following exploratory discussions that dwelt on its ergonomic aspects. To keep in stride with this development, the Technical Committee on Human-Computer Interaction under the auspices of the International Ergonomics Association worked towards the international issuance of the guidelines. This paper unveils the result of this collaborative effort.

  10. Error reduction, patient safety and institutional ethics committees.

    PubMed

    Meaney, Mark E

    2004-01-01

    Institutional ethics committees remain largely absent from the literature on error reduction and patient safety. In this paper, the author endeavors to fill the gap. As noted in the Hastings Center's recent report, "Promoting Patient Safety," the occurrence of medical error involves complex web of multiple factors. Human misstep is certainly one such factor, but not the only one. This paper builds on the Hastings Center's report in arguing that institutional ethics committees ought to play an integral role in the transformation of a "culture of blame" to a "culture of safety" in healthcare delivery.

  11. TECHNICAL ADVANCES: Effects of genotyping protocols on success and errors in identifying individual river otters (Lontra canadensis) from their faeces.

    PubMed

    Hansen, Heidi; Ben-David, Merav; McDonald, David B

    2008-03-01

    In noninvasive genetic sampling, when genotyping error rates are high and recapture rates are low, misidentification of individuals can lead to overestimation of population size. Thus, estimating genotyping errors is imperative. Nonetheless, conducting multiple polymerase chain reactions (PCRs) at multiple loci is time-consuming and costly. To address the controversy regarding the minimum number of PCRs required for obtaining a consensus genotype, we compared consumer-style the performance of two genotyping protocols (multiple-tubes and 'comparative method') in respect to genotyping success and error rates. Our results from 48 faecal samples of river otters (Lontra canadensis) collected in Wyoming in 2003, and from blood samples of five captive river otters amplified with four different primers, suggest that use of the comparative genotyping protocol can minimize the number of PCRs per locus. For all but five samples at one locus, the same consensus genotypes were reached with fewer PCRs and with reduced error rates with this protocol compared to the multiple-tubes method. This finding is reassuring because genotyping errors can occur at relatively high rates even in tissues such as blood and hair. In addition, we found that loci that amplify readily and yield consensus genotypes, may still exhibit high error rates (7-32%) and that amplification with different primers resulted in different types and rates of error. Thus, assigning a genotype based on a single PCR for several loci could result in misidentification of individuals. We recommend that programs designed to statistically assign consensus genotypes should be modified to allow the different treatment of heterozygotes and homozygotes intrinsic to the comparative method. © 2007 The Authors.

  12. Direct Geolocation of TerraSAR-X Spotlight Mode Image and Error Correction

    NASA Astrophysics Data System (ADS)

    Zhou, Xiao; Zeng, Qiming; Jiao, Jian; Zhang, Jingfa; Gong, Lixia

    2013-01-01

    The GERMAN TerraSAR-X mission was launched in June 2007, operating a versatile new-generation SAR sensor in X-band. Its Spotlight mode providing SAR images at very high resolution of about 1m. The product’s specified 3-D geolocation accuracy is tightened to 1m according to the official technical report. However, this accuracy is able to be achieved relies on not only robust mathematical basis of SAR geolocation, but also well knowledge of error sources and their correction. The research focuses on geolocation of TerraSAR-X spotlight image. Mathematical model and resolving algorithms have been analyzed. Several error sources have been researched and corrected especially. The effectiveness and accuracy of the research was verified by the experiment results.

  13. Missing data and technical variability in single-cell RNA-sequencing experiments.

    PubMed

    Hicks, Stephanie C; Townes, F William; Teng, Mingxiang; Irizarry, Rafael A

    2017-11-06

    Until recently, high-throughput gene expression technology, such as RNA-Sequencing (RNA-seq) required hundreds of thousands of cells to produce reliable measurements. Recent technical advances permit genome-wide gene expression measurement at the single-cell level. Single-cell RNA-Seq (scRNA-seq) is the most widely used and numerous publications are based on data produced with this technology. However, RNA-seq and scRNA-seq data are markedly different. In particular, unlike RNA-seq, the majority of reported expression levels in scRNA-seq are zeros, which could be either biologically-driven, genes not expressing RNA at the time of measurement, or technically-driven, genes expressing RNA, but not at a sufficient level to be detected by sequencing technology. Another difference is that the proportion of genes reporting the expression level to be zero varies substantially across single cells compared to RNA-seq samples. However, it remains unclear to what extent this cell-to-cell variation is being driven by technical rather than biological variation. Furthermore, while systematic errors, including batch effects, have been widely reported as a major challenge in high-throughput technologies, these issues have received minimal attention in published studies based on scRNA-seq technology. Here, we use an assessment experiment to examine data from published studies and demonstrate that systematic errors can explain a substantial percentage of observed cell-to-cell expression variability. Specifically, we present evidence that some of these reported zeros are driven by technical variation by demonstrating that scRNA-seq produces more zeros than expected and that this bias is greater for lower expressed genes. In addition, this missing data problem is exacerbated by the fact that this technical variation varies cell-to-cell. Then, we show how this technical cell-to-cell variability can be confused with novel biological results. Finally, we demonstrate and discuss how batch

  14. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high value, high profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results lead to in a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries like medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporate the steps needed to manage and minimize error.

  15. Estimation of chromatic errors from broadband images for high contrast imaging

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Web Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped- pupil coronagraph with a conjugate DM and compute the resulting estimation error.

  16. Differences among Job Positions Related to Communication Errors at Construction Sites

    NASA Astrophysics Data System (ADS)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communicatio n errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in perception of communication error risk . Questionnaires based on the previous study were a dministered to construction workers (n=811; 149 adminis trators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns wer e inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  17. Analysis of the “naming game” with learning errors in communications

    NASA Astrophysics Data System (ADS)

    Lou, Yang; Chen, Guanrong

    2015-07-01

    Naming game simulates the process of naming an objective by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study naming game with communication errors during pair-wise conversations, with error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the requirement for memory of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning errors which impairs the convergence. The new findings may help to better understand the role of learning errors in naming game as well as in human language development from a network science perspective.

  18. At least some errors are randomly generated (Freud was wrong)

    NASA Technical Reports Server (NTRS)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.

  19. Social deviance activates the brain's error-monitoring system.

    PubMed

    Kim, Bo-Rin; Liss, Alison; Rao, Monica; Singer, Zachary; Compton, Rebecca J

    2012-03-01

    Social psychologists have long noted the tendency for human behavior to conform to social group norms. This study examined whether feedback indicating that participants had deviated from group norms would elicit a neural signal previously shown to be elicited by errors and monetary losses. While electroencephalograms were recorded, participants (N = 30) rated the attractiveness of 120 faces and received feedback giving the purported average rating made by a group of peers. The feedback was manipulated so that group ratings either were the same as a participant's rating or deviated by 1, 2, or 3 points. Feedback indicating deviance from the group norm elicited a feedback-related negativity, a brainwave signal known to be elicited by objective performance errors and losses. The results imply that the brain treats deviance from social norms as an error.

  20. 78 FR 34264 - Technical Corrections to the HIPAA Privacy, Security, and Enforcement Rules

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-07

    ...-AA03 Technical Corrections to the HIPAA Privacy, Security, and Enforcement Rules AGENCY: Office for... corrections address certain inadvertent errors and omissions in the HIPAA Privacy, Security, and Enforcement... (HHS or ``the Department'') published a final rule to implement changes to the HIPAA Privacy, Security...

  1. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
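
    For context, the "true error" benchmark mentioned above is usually approximated by Richardson extrapolation when an exact solution is unavailable. The sketch below shows that estimator on a toy one-dimensional problem (a central-difference derivative), not on the Navier-Stokes cases of the study; the refinement ratio r and formal order p are assumptions of the toy setup.

```python
import numpy as np

def richardson_error_estimate(f_coarse, f_fine, r=2.0, p=2.0):
    """Estimate the discretization error of the fine-grid solution:
    e_fine ≈ (f_coarse - f_fine) / (r**p - 1), with refinement ratio r
    and formal order of accuracy p."""
    return (f_coarse - f_fine) / (r**p - 1.0)

# Toy illustration: second-order central-difference derivative of sin(x) at x = 1.
x, h = 1.0, 0.1
d_coarse = (np.sin(x + 2*h) - np.sin(x - 2*h)) / (4*h)   # step size 2h
d_fine   = (np.sin(x + h)   - np.sin(x - h))   / (2*h)   # step size h
est_err  = richardson_error_estimate(d_coarse, d_fine, r=2.0, p=2.0)
true_err = d_fine - np.cos(x)                             # exact derivative is cos(x)
print(f"estimated error {est_err:+.2e}, true error {true_err:+.2e}")
```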

  2. Errors in clinical laboratories or errors in laboratory medicine?

    PubMed

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  3. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model.

    PubMed

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander

    2015-04-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.

  4. Asymmetric generalization in adaptation to target displacement errors in humans and in a neural network model

    PubMed Central

    Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher

    2015-01-01

    Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106

  5. Exploring Senior Residents' Intraoperative Error Management Strategies: A Potential Measure of Performance Improvement.

    PubMed

    Law, Katherine E; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; DiMarco, Shannon M; Linsmeier, Elyse; Wiegmann, Douglas A; Pugh, Carla M

    reflect residents' error management success in the earlier stages, which allowed further progression in the second simulation. Incorporating error recognition and management opportunities into surgical training could help track residents' learning curve and provide detailed, structured feedback on technical and decision-making skills. Copyright © 2016 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  6. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does that need to be the case? Companies with high risk, or major consequences, should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, errors in the administration of medication, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas for concern to detailed assessments with calculated human error failure probabilities. Which methodology to use would be based on a variety of factors that include: 1) how people react and act in different industries, and differing expectations based on industry standards, 2) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 3) type and availability of data and 4) how the industry views risk and reliability influences (types of emergencies, contingencies and routine tasks versus cost-based concerns). The Human Reliability Assessment should be the first step to reduce, mitigate or eliminate costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks allows a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  7. Supporting the human life-raft in confronting the juggernaut of technology: Jens Rasmussen, 1961-1986.

    PubMed

    Kant, Vivek

    2017-03-01

    Jens Rasmussen's contribution to the field of human factors and ergonomics has had a lasting impact. Six prominent interrelated themes can be extracted from his research between 1961 and 1986. These themes form the basis of an engineering epistemology which is best manifested by his abstraction hierarchy. Further, Rasmussen reformulated technical reliability using systems language to enable a proper human-machine fit. To understand the concept of human-machine fit, he included the operator as a central component in the system to enhance system safety. This change resulted in the application of a qualitative and categorical approach for human-machine interaction design. Finally, Rasmussen's insistence on a working philosophy of systems design as being a joint responsibility of operators and designers provided the basis for averting errors and ensuring safe and correct system functioning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  9. Defining health information technology-related errors: new developments since to err is human.

    PubMed

    Sittig, Dean F; Singh, Hardeep

    2011-07-25

    Despite the promise of health information technology (HIT), recent literature has revealed possible safety hazards associated with its use. The Office of the National Coordinator for HIT recently sponsored an Institute of Medicine committee to synthesize evidence and experience from the field on how HIT affects patient safety. To lay the groundwork for defining, measuring, and analyzing HIT-related safety hazards, we propose that HIT-related error occurs anytime HIT is unavailable for use, malfunctions during use, is used incorrectly by someone, or when HIT interacts with another system component incorrectly, resulting in data being lost or incorrectly entered, displayed, or transmitted. These errors, or the decisions that result from them, significantly increase the risk of adverse events and patient harm. We describe how a sociotechnical approach can be used to understand the complex origins of HIT errors, which may have roots in rapidly evolving technological, professional, organizational, and policy initiatives.

  10. Quantum error correction for continuously detected errors with any number of error channels per qubit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt

    2004-08-01

    It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.

  11. The evidence of the rugoscopy effectiveness as a human identification method in patients submitted to rapid palatal expansion.

    PubMed

    Barbieri, Ana A; Scoralick, Raquel A; Naressi, Suely C M; Moraes, Mari E L; Daruge, Eduardo; Daruge, Eduardo

    2013-01-01

    The objective of this study was to demonstrate the effectiveness of rugoscopy as a human identification method, even when the patient is submitted to rapid palatal expansion, which in theory would introduce doubt. With this intent, the Rugoscopic Identity was obtained for each subject using the classification formula proposed by Santos based on the intra-oral casts made before and after treatment from patients who were subjected to palatal expansion. The casts were labeled with the patients' initials and randomly arranged for studying. The palatine rugae kept the same patterns in every case studied. The technical error of the intra-evaluator measurement provided a confidence interval of 95%, making rugoscopy a reliable identification method for patients who were submitted to rapid palatal expansion, because even in the presence of intra-oral changes owing to the use of palatal expanders, the palatine rugae retained the biological and technical requirements for the human identification process. © 2012 American Academy of Forensic Sciences.

  12. Development and validation of Aviation Causal Contributors for Error Reporting Systems (ACCERS).

    PubMed

    Baker, David P; Krokos, Kelley J

    2007-04-01

    This investigation sought to develop a reliable and valid classification system for identifying and classifying the underlying causes of pilot errors reported under the Aviation Safety Action Program (ASAP). ASAP is a voluntary safety program that air carriers may establish to study pilot and crew performance on the line. In ASAP programs, similar to the Aviation Safety Reporting System, pilots self-report incidents by filing a short text description of the event. The identification of contributors to errors is critical if organizations are to improve human performance, yet it is difficult for analysts to extract this information from text narratives. A taxonomy was needed that could be used by pilots to classify the causes of errors. After completing a thorough literature review, pilot interviews and a card-sorting task were conducted in Studies 1 and 2 to develop the initial structure of the Aviation Causal Contributors for Event Reporting Systems (ACCERS) taxonomy. The reliability and utility of ACCERS was then tested in studies 3a and 3b by having pilots independently classify the primary and secondary causes of ASAP reports. The results provided initial evidence for the internal and external validity of ACCERS. Pilots were found to demonstrate adequate levels of agreement with respect to their category classifications. ACCERS appears to be a useful system for studying human error captured under pilot ASAP reports. Future work should focus on how ACCERS is organized and whether it can be used or modified to classify human error in ASAP programs for other aviation-related job categories such as dispatchers. Potential applications of this research include systems in which individuals self-report errors and that attempt to extract and classify the causes of those events.

  13. Optimizing the NASA Technical Report Server

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Maa, Ming-Hokng

    1996-01-01

    The NASA Technical Report Server (NTRS), a World Wide Web distribution service for NASA technical publications, is modified for performance enhancement, greater protocol support, and human interface optimization. Results include: parallel database queries, significantly decreasing user access times by an average factor of 2.3; access from clients behind firewalls and/or proxies which truncate excessively long Uniform Resource Locators (URLs); access to non-Wide Area Information Server (WAIS) databases and compatibility with the Z39.50 protocol; and a streamlined user interface.
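
    The parallel-query speedup reported above follows from overlapping independent back-end searches, which can be sketched generically as below. The database names and latencies are invented placeholders, and threads stand in for whatever concurrency mechanism the original WAIS gateway actually used.

```python
import concurrent.futures
import time

# Hypothetical stand-in for querying one back-end report database; the real
# NTRS queried multiple distributed servers, which is not reproduced here.
def query_database(name, delay):
    time.sleep(delay)                 # simulate network / search latency
    return f"{name}: 12 hits"

databases = {"langley": 0.8, "ames": 0.5, "lewis": 0.7, "jpl": 0.6}

# Sequential baseline: total time is roughly the sum of all query latencies.
t0 = time.perf_counter()
seq = [query_database(n, d) for n, d in databases.items()]
t_seq = time.perf_counter() - t0

# Parallel queries: total time approaches the slowest single query.
t0 = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor() as pool:
    par = list(pool.map(lambda nd: query_database(*nd), databases.items()))
t_par = time.perf_counter() - t0

print(f"sequential {t_seq:.2f}s vs parallel {t_par:.2f}s")
```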

  14. Study of style effects on OCR errors in the MEDLINE database

    NASA Astrophysics Data System (ADS)

    Garrison, Penny; Davis, Diane L.; Andersen, Tim L.; Barney Smith, Elisa H.

    2005-01-01

    The National Library of Medicine has developed a system for the automatic extraction of data from scanned journal articles to populate the MEDLINE database. Although the 5-engine OCR system used in this process exhibits good performance overall, it does make errors in character recognition that must be corrected in order for the process to achieve the requisite accuracy. The correction process works by feeding words that have characters with less than 100% confidence (as determined automatically by the OCR engine) to a human operator who then must manually verify the word or correct the error. The majority of these errors are contained in the affiliation information zone where the characters are in italics or small fonts. Therefore only affiliation information data is used in this research. This paper examines the correlation between OCR errors and various character attributes in the MEDLINE database, such as font size, italics, bold, etc. and OCR confidence levels. The motivation for this research is that if a correlation between the character style and types of errors exists it should be possible to use this information to improve operator productivity by increasing the probability that the correct word option is presented to the human editor. We have determined that this correlation exists, in particular for the case of characters with diacritics.

  15. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2014-01-01

    Human error cannot be defined unambiguously in advance of it happening, it often becomes an error after the fact. The same action can result in a tragic accident for one situation or a heroic action given a more favorable outcome. People often forget that we employ humans in business and industry for the flexibility and capability to change when needed. In complex systems, operations are driven by their specifications of the system and the system structure. People provide the flexibility to make it work. Human error has been reported as being responsible for 60%-80% of failures, accidents and incidents in high-risk industries. We don't have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed. There are actions that can be taken to reduce the risk of human error.

  16. Inherent safety, ethics and human error.

    PubMed

    Papadaki, Maria

    2008-02-11

    stated. The reason this article is presented here is that I believe that complex accidents, like insignificant ones, often demonstrate an attitude which can be characterized as "inherently unsafe". I take the view that the enormous human potential and the human ability to minimize accidents need to become a focal point towards inherent safety. Restricting ourselves to human limitations and to how we could "treat" humans or prevent them from making accidents needs to be re-addressed. The purpose of this presentation is to highlight observations and provoke a discussion on how we could possibly improve the understanding of safety-related issues. I do not intend to reject or criticize existing methodologies. (The entire presentation is strongly influenced by Trevor Kletz's work, although our views are often different.)

  17. Human Factors Throughout the Life Cycle: Lessons Learned from the Shuttle Program. [Human Factors in Ground Processing

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.

    2011-01-01

    With the ending of the Space Shuttle Program, it is critical that we not forget the Human Factors lessons we have learned over the years. At every phase of the life cycle, from manufacturing, processing and integrating vehicle and payload, to launch, flight operations, mission control and landing, hundreds of teams have worked together to achieve mission success in one of the most complex, high-risk socio-technical enterprises ever designed. Just as there was great diversity in the types of operations performed at every stage, there was a myriad of human factors that could further complicate these human systems. A single mishap or close call could point to issues at the individual level (perceptual or workload limitations, training, fatigue, human error susceptibilities), the task level (design of tools, procedures and aspects of the workplace), as well as the organizational level (appropriate resources, safety policies, information access and communication channels). While we have often had to learn through human mistakes and technological failures, we have also begun to understand how to design human systems in which individuals can excel, where tasks and procedures are not only safe but efficient, and how organizations can foster a proactive approach to managing risk and supporting human enterprises. Panelists will talk about their experiences as they relate human factors to a particular phase of the shuttle life cycle. They will conclude with a framework for tying together human factors lessons-learned into system-level risk management strategies.

  18. Technical and Vocational Education in Nigeria: Issues, Challenges and a Way Forward

    ERIC Educational Resources Information Center

    Okoye, Reko; Arimonu, Maxwell Onyenwe

    2016-01-01

    Technical education, as enshrined in the Nigerian national policy on education, is concerned with qualitative technological human resources development directed towards a national pool of skilled and self reliant craftsmen, technicians and technologists in technical and vocational education fields. In Nigeria, the training of technical personnel…

  19. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not
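
    The study's core check, running the same calculation in independently built model versions and flagging outputs that diverge, can be illustrated outside a spreadsheet as follows. The model_v1/model_v2 functions and the care-continuum quantities are hypothetical stand-ins; only the +/-5% materiality threshold is taken from the text.

```python
# Hypothetical check mirroring the study's approach: implement the "same" model in
# two independently built versions and flag outputs that differ by more than 5%.
def pct_diff(a, b):
    return abs(a - b) / abs(b) * 100 if b != 0 else float("inf")

def model_v1(patients, linkage_rate, treatment_rate):
    linked = patients * linkage_rate
    return {"linked": linked, "treated": linked * treatment_rate}

def model_v2(patients, linkage_rate, treatment_rate):
    linked = patients * linkage_rate
    treated = patients * treatment_rate        # unintentional error: wrong reference
    return {"linked": linked, "treated": treated}

out1 = model_v1(10_000, 0.8, 0.6)
out2 = model_v2(10_000, 0.8, 0.6)
for key in out1:
    d = pct_diff(out2[key], out1[key])
    flag = "MATERIAL ERROR" if d > 5 else "ok"
    print(f"{key}: v1={out1[key]:.0f} v2={out2[key]:.0f} diff={d:.1f}% {flag}")
```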

  20. Functional requirements for the man-vehicle systems research facility. [identifying and correcting human errors during flight simulation

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Allen, R. W.; Heffley, R. K.; Jewell, W. F.; Jex, H. R.; Mcruer, D. T.; Schulman, T. M.; Stapleford, R. L.

    1980-01-01

    The NASA Ames Research Center proposed a man-vehicle systems research facility to support flight simulation studies which are needed for identifying and correcting the sources of human error associated with current and future air carrier operations. The organization of research facility is reviewed and functional requirements and related priorities for the facility are recommended based on a review of potentially critical operational scenarios. Requirements are included for the experimenter's simulation control and data acquisition functions, as well as for the visual field, motion, sound, computation, crew station, and intercommunications subsystems. The related issues of functional fidelity and level of simulation are addressed, and specific criteria for quantitative assessment of various aspects of fidelity are offered. Recommendations for facility integration, checkout, and staffing are included.

  1. The use of a supplemental sulcus fixated IOL (HumanOptics Add-On IOL) to correct pseudophakic refractive errors.

    PubMed

    Basarir, Berna; Kaya, Vedat; Altan, Cigdem; Karakus, Sezen; Pinarci, Eylem Y; Demirok, Ahmet

    2012-01-01

    To evaluate the safety and efficacy of piggybacking with the HumanOptics Add-On intraocular lens (IOL) to correct pseudophakic refractive errors. Ten eyes of 10 patients with pseudophakic refractive errors were included in this study. All patients were targeted for a refraction in the range of -0.50 to +0.50 D. Uncorrected and corrected distance visual acuities (UDVA and CDVA, respectively), endothelial cell count (ECC), anterior chamber depth (ACD), the distance between intraocular lenses, and contrast sensitivity measurements under mesopic, scotopic, and scotopic with glare conditions were evaluated preoperatively and postoperatively. The mean age of the patients was 54±27 years (range 4-78). Mean follow-up time was 10.5±1.36 months (range 6-15 months). Mean diopters of implanted Add-On IOLs were -1.4±6.9 (range -12 to +9 D). Mean preoperative and postoperative UDVA were 0.133±0.12 and 0.73±0.27, respectively (p=0.0001); mean preoperative and postoperative CDVA were 0.77±0.26 and 0.79±0.27, respectively (p=0.066). Mean preoperative and postoperative ACD were 3.87±0.91 mm vs 3.58±1.05 mm, respectively (p=0.343); mean inter-IOL distance was 0.53±0.08 mm. Mean preoperative and postoperative ECC were 2455±302 and 2426±294, respectively (p=0.55). All patients were within the targeted refractive range of -0.50 D to +0.50 D. No complications were observed during the operations or postoperative follow-up period. Piggybacking with the Add-On IOL is a safe, efficient, and reliable technique to correct pseudophakic refractive errors.

  2. Performance Data Errors in Air Carrier Operations: Causes and Countermeasures

    NASA Technical Reports Server (NTRS)

    Berman, Benjamin A.; Dismukes, R Key; Jobe, Kimberly K.

    2012-01-01

    Several airline accidents have occurred in recent years as the result of erroneous weight or performance data used to calculate V-speeds, flap/trim settings, required runway lengths, and/or required climb gradients. In this report we consider 4 recent studies of performance data error, report our own study of ASRS-reported incidents, and provide countermeasures that can reduce vulnerability to accidents caused by performance data errors. Performance data are generated through a lengthy process involving several employee groups and computer and/or paper-based systems. Although much of the airline industry's concern has focused on errors pilots make in entering FMS data, we determined that errors occur at every stage of the process and that errors by ground personnel are probably at least as frequent and certainly as consequential as errors by pilots. Most of the errors we examined could in principle have been trapped by effective use of existing procedures or technology; however, the fact that they were not trapped anywhere indicates the need for better countermeasures. Existing procedures are often inadequately designed to mesh with the ways humans process information. Because procedures often do not take into account the ways in which information flows in actual flight operations and the time pressures and interruptions experienced by pilots and ground personnel, vulnerability to error is greater. Some aspects of NextGen operations may exacerbate this vulnerability. We identify measures to reduce the number of errors and to help catch the errors that occur.

  3. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    NASA Astrophysics Data System (ADS)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for a d = 3 Steane and surface code. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  4. 45 CFR 1168.205 - Professional and technical services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 3 2011-10-01 2011-10-01 false Professional and technical services. 1168.205 Section 1168.205 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING...

  5. 45 CFR 1168.300 - Professional and technical services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 3 2011-10-01 2011-10-01 false Professional and technical services. 1168.300 Section 1168.300 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING...

  6. 45 CFR 1168.300 - Professional and technical services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Professional and technical services. 1168.300 Section 1168.300 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING...

  7. 45 CFR 1168.205 - Professional and technical services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false Professional and technical services. 1168.205 Section 1168.205 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL FOUNDATION ON THE ARTS AND THE HUMANITIES NATIONAL ENDOWMENT FOR THE HUMANITIES NEW RESTRICTIONS ON LOBBYING...

  8. Optimized universal color palette design for error diffusion

    NASA Astrophysics Data System (ADS)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
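
    A bare-bones version of error diffusion against a fixed palette is sketched below. It uses nearest-neighbour lookup in RGB with Floyd-Steinberg weights, whereas the paper's method allocates the palette by sequential scalar quantization and minimizes a visually weighted error in an opponent-colour space; the 8-colour palette and gradient image here are illustrative assumptions only.

```python
import numpy as np

def error_diffuse(image, palette):
    """Floyd-Steinberg error diffusion of an RGB image onto a fixed palette.

    `image` is a float array (H, W, 3) in [0, 1]; `palette` is (K, 3).
    Returns an (H, W) array of palette indices.
    """
    img = image.astype(float).copy()
    h, w, _ = img.shape
    out = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            idx = int(np.argmin(((palette - old) ** 2).sum(axis=1)))  # nearest palette entry
            out[y, x] = idx
            err = old - palette[idx]                                   # quantization error
            if x + 1 < w:               img[y,     x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x    ] += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

# 8-colour palette (corners of the RGB cube) applied to a smooth gray gradient.
palette = np.array([[r, g, b] for r in (0, 1) for g in (0, 1) for b in (0, 1)], float)
gradient = np.dstack([np.tile(np.linspace(0, 1, 64), (64, 1))] * 3)
indices = error_diffuse(gradient, palette)
print(indices.shape, np.unique(indices))
```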

  9. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a certain desired bit error rate. The use of concatenated coding, e.g. an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
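
    Of the codes listed above, the 16-bit CRC is simple enough to sketch directly. The bitwise implementation below uses the common CCITT parameters (polynomial 0x1021, initial value 0xFFFF, no reflection); whether those match the exact CCSDS convention should be checked against the relevant CCSDS recommendation rather than assumed from this sketch.

```python
def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """Bitwise CRC-16 with the CCITT polynomial x^16 + x^12 + x^5 + 1.

    Parameters here (poly 0x1021, init 0xFFFF, no reflection, no final XOR)
    are the widespread "CCITT-FALSE" variant, used as an assumption rather
    than a statement of the CCSDS convention.
    """
    crc = init
    for byte in data:
        crc ^= byte << 8                      # fold the next byte into the register
        for _ in range(8):                    # shift out one bit at a time
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# "123456789" is the conventional check string; this variant yields 0x29B1.
print(hex(crc16_ccitt(b"123456789")))
```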

  10. [Diagnostic Errors in Medicine].

    PubMed

    Buser, Claudia; Bankova, Andriyana

    2015-12-09

    The recognition of diagnostic errors in everyday practice can help improve patient safety. The most common diagnostic errors are the cognitive errors, followed by system-related errors and no fault errors. The cognitive errors often result from mental shortcuts, known as heuristics. The rate of cognitive errors can be reduced by a better understanding of heuristics and the use of checklists. The autopsy as a retrospective quality assessment of clinical diagnosis has a crucial role in learning from diagnostic errors. Diagnostic errors occur more often in primary care in comparison to hospital settings. On the other hand, the inpatient errors are more severe than the outpatient errors.

  11. Restrictions on surgical resident shift length does not impact type of medical errors.

    PubMed

    Anderson, Jamie E; Goodman, Laura F; Jensen, Guy W; Salcedo, Edgardo S; Galante, Joseph M

    2017-05-15

    In 2011, resident duty hours were restricted in an attempt to improve patient safety and resident education. Although shorter shifts aim to reduce fatigue, they lead to more patient handoffs, raising concerns about adverse effects on patient safety. This study seeks to determine whether differences in duty-hour restrictions influence the types of errors made by residents. This is a nested retrospective cohort study at a surgery department in an academic medical center. During 2013-14, standard 2011 duty hours were in place for residents. In 2014-15, duty-hour restrictions at the study site were relaxed ("flexible") with no restrictions on shift length. We reviewed all morbidity and mortality submissions from July 1, 2013-June 30, 2015 and compared differences in types of errors between these periods. A total of 383 patients experienced adverse events, including 59 deaths (15.4%). Comparing standard versus flexible periods, there was no difference in mortality (15.7% versus 12.6%, P = 0.479) or complication rates (2.6% versus 2.5%, P = 0.696). There was no difference in types of errors between periods (P = 0.050-0.808). The largest number of errors was attributable to cognitive failures (229, 59.6%), whereas the fewest errors were due to team failure (127, 33.2%). By subset, technical errors accounted for the highest number of errors (169, 44.1%). There were no differences in types of errors for cases that were nonelective, occurred at night, or involved residents. Among adverse events reported in this departmental surgical morbidity and mortality, there were no differences in types of errors when resident duty hours were less restrictive. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Common errors of drug administration in infants: causes and avoidance.

    PubMed

    Anderson, B J; Ellis, J F

    1999-01-01

    Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.

  13. Error quantification of osteometric data in forensic anthropology.

    PubMed

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-06-01

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error among a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEM<0.5), and (2) maximum and minimum diameters at midshaft are more reliable than their positionally-dependent counterparts (i.e. sagittal, vertical, transverse, dorso-volar). Therefore, maxima and minima are specified for all midshaft measurements in DCP 2.0. Twenty-two measurements were flagged for excessive variability (either interobserver, intraobserver, or both); 15 of these measurements were part of the standard set of measurements in Data Collection Procedures for Forensic Skeletal Material, 3rd edition. Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from
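
    The relative TEM statistic used above has a standard two-trial form that is easy to compute directly. The sketch below applies it to invented repeat measurements of a single dimension; it is not the multi-observer ANOVA workflow of the study, just the basic repeatability calculation.

```python
import numpy as np

def technical_error_of_measurement(trial1, trial2):
    """Absolute and relative TEM for two repeated measurements of the same items.

    TEM = sqrt(sum(d_i^2) / (2N)); relative TEM (%) = TEM / grand mean * 100.
    This is the two-trial form of the formula; multi-observer variants exist.
    """
    t1, t2 = np.asarray(trial1, float), np.asarray(trial2, float)
    d = t1 - t2
    tem = np.sqrt((d ** 2).sum() / (2 * len(d)))
    rel_tem = tem / np.concatenate([t1, t2]).mean() * 100
    return tem, rel_tem

# Hypothetical repeated measurements (mm) of maximum femur length on 5 skeletons.
first  = [452.0, 431.5, 468.0, 440.5, 455.0]
second = [452.5, 431.0, 467.5, 441.0, 455.5]
tem, rel = technical_error_of_measurement(first, second)
print(f"TEM = {tem:.2f} mm, relative TEM = {rel:.3f}%")
```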

  14. Technical writing in America: A historical perspective

    NASA Technical Reports Server (NTRS)

    Connaughton, M. E.

    1981-01-01

    The standard distinction between poetic and referential language, the gulf between science and the humanities, and the distress many teachers of English feel when faced for the first time with the prospect of teaching technical writing are discussed. In the introductions of many technical writing textbooks, technical communication is divorced from other forms of linguistic experience by making language limiting and reductive rather than creative and expansive. The emphasis on technical/scientific writing as radically different has blinded people to those traits it has in common with all species of composition and has led to a neglect of research on fundamental rhetorical issues. A complete rhetorical theory of technical discourse should include information about the attitudes and motives of writers, the situations which motivate (or coerce) them to write, definitive features of technical style and form, the interrelationship of expression and creativity, and the functions of communication in shaping and preserving scientific networks and institutions. These areas should be explored with respect to contemporary practice and within an historical perspective.

  15. An fMRI and effective connectivity study investigating miss errors during advice utilization from human and machine agents.

    PubMed

    Goodyear, Kimberly; Parasuraman, Raja; Chernyak, Sergey; de Visser, Ewart; Madhavan, Poornima; Deshpande, Gopikrishna; Krueger, Frank

    2017-10-01

    As society becomes more reliant on machines and automation, understanding how people utilize advice is a necessary endeavor. Our objective was to reveal the underlying neural associations during advice utilization from expert human and machine agents with fMRI and multivariate Granger causality analysis. During an X-ray luggage-screening task, participants accepted or rejected good or bad advice from either the human or machine agent framed as experts with manipulated reliability (high miss rate). We showed that the machine-agent group decreased their advice utilization compared to the human-agent group and these differences in behaviors during advice utilization could be accounted for by high expectations of reliable advice and changes in attention allocation due to miss errors. Brain areas involved with the salience and mentalizing networks, as well as sensory processing involved with attention, were recruited during the task and the advice utilization network consisted of attentional modulation of sensory information with the lingual gyrus as the driver during the decision phase and the fusiform gyrus as the driver during the feedback phase. Our findings expand on the existing literature by showing that misses degrade advice utilization, which is represented in a neural network involving salience detection and self-processing with perceptual integration.

  16. An MEG signature corresponding to an axiomatic model of reward prediction error.

    PubMed

    Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J

    2012-01-02

    Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, a MEG signature of prediction error, which emerged approximately 320 ms after an outcome and expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. 78 FR 11237 - Public Hearing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-15

    ... management of human error in its operations and system safety programs, and the status of PTC implementation... UP's safety management policies and programs associated with human error, operational accident and... Chairman of the Board of Inquiry 2. Introduction of the Board of Inquiry and Technical Panel 3...

  18. Study on relationship of performance shaping factor in human error probability with prevalent stress of PUSPATI TRIGA reactor operators

    NASA Astrophysics Data System (ADS)

    Rahim, Ahmad Nabil Bin Ab; Mohamed, Faizal; Farid, Mohd Fairus Abdul; Fazli Zakaria, Mohd; Sangau Ligam, Alfred; Ramli, Nurhayati Binti

    2018-01-01

    Human performance can be affected by prevalent stress, measured here using the Depression, Anxiety and Stress Scale (DASS). The respondents' feedback can be summarized as follows: the main factor causing the highest prevalent stress is a working condition that requires operators to handle critical situations and make prompt critical decisions. Relating prevalent stress to performance shaping factors showed that PSF-Fitness and PSF-Work Process had positive Pearson correlations, with scores of .763 and .826 and significance levels of p = .028 and p = .012, respectively. These are positive, significant correlations between prevalent stress and the human performance shaping factors (PSFs) related to fitness and to work processes and procedures. The higher the respondents' stress level, the higher the scores for these PSFs, because higher levels of stress lead to deteriorating physical health and worsening cognition. In addition, a lack of understanding of the work procedures can itself be a factor that increases stress. The higher these values, the higher the probability that human error will occur. Thus, monitoring the level of stress among RTP operators is important to ensure the safety of RTP.
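
    The reported associations are plain Pearson correlations, which can be computed as sketched below. The stress and PSF scores here are fabricated for illustration; only the correlation and p-value machinery corresponds to the analysis described.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores: DASS stress level and a PSF rating per operator.
# (The study reports r = .763, p = .028 for fitness and r = .826, p = .012 for
# work process; the numbers below are made up for illustration only.)
stress  = np.array([12, 18, 9, 22, 15, 27, 11, 20])
psf_fit = np.array([ 3,  4, 2,  5,  3,  5,  2,  4])

r, p = stats.pearsonr(stress, psf_fit)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
```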

  19. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    PubMed

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of the 310 pediatric chemotherapy error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  20. The use of a checklist improves anaesthesiologists' technical and non-technical performance for simulated malignant hyperthermia management.

    PubMed

    Hardy, Jean-Baptiste; Gouin, Antoine; Damm, Cédric; Compère, Vincent; Veber, Benoît; Dureuil, Bertrand

    2018-02-01

    Anaesthesiologists may occasionally manage life-threatening operating room (OR) emergencies. Managing OR emergencies implies real-time analysis of often complicated situations, prompt medical knowledge retrieval, coordinated teamwork and effective decision making in stressful settings. Checklists are recommended to improve performance and reduce the risk of medical errors. This study aimed to assess the usefulness of the French Society of Anaesthesia and Intensive Care's (SFAR) "Malignant Hyperthermia" (MH) checklist on a simulated episode of MH crisis and management thereof by registered anesthesiologists. Twenty-four anaesthesiologists were allocated to 2 groups (checklist and control). Their technical performance in adherence with the SFAR guidelines was assessed by a 30-point score and their non-technical performance was assessed by the Anaesthetists' Non-Technical Skills (ANTS) score. Every task completion was assessed independently. Data are shown as median (first-third quartiles). Anaesthesiologists in the checklist group had higher technical performance scores (24/30 (21.5-25) vs 18/30 (15.5-19.5), P=0.002) and ANTS scores (56.5/60 (47.5-58) vs 48.5/60 (41-50.5), P=0.024). They administered the complete initial dose of dantrolene (2mg/kg) more quickly (15.7 minutes [13.9-18.3] vs 22.4 minutes [18.6-25]) than the control group (P=0.017). However, anaesthesiologists deemed the usability of the checklist to be perfectible. Registered anaesthesiologists' use of the MH checklist during a simulation session widely improved their adherence to guidelines and non-technical skills. This study strongly suggests the benefit of checklist tools for emergency management. Notwithstanding, better awareness and training for anaesthesiologists could further improve the use of this tool. Copyright © 2017 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.

  1. Commentary: Reducing diagnostic errors: another role for checklists?

    PubMed

    Winters, Bradford D; Aswani, Monica S; Pronovost, Peter J

    2011-03-01

    Diagnostic errors are a widespread problem, although the true magnitude is unknown because they cannot currently be measured validly. These errors have received relatively little attention despite alarming estimates of associated harm and death. One promising intervention to reduce preventable harm is the checklist. This intervention has proven successful in aviation, in which situations are linear and deterministic (one alarm goes off and a checklist guides the flight crew to evaluate the cause). In health care, problems are multifactorial and complex. A checklist has been used to reduce central-line-associated bloodstream infections in intensive care units. Nevertheless, this checklist was incorporated in a culture-based safety program that engaged and changed behaviors and used robust measurement of infections to evaluate progress. In this issue, Ely and colleagues describe how three checklists could reduce the cognitive biases and mental shortcuts that underlie diagnostic errors, but point out that these tools still need to be tested. To be effective, they must reduce diagnostic errors (efficacy) and be routinely used in practice (effectiveness). Such tools must intuitively support how the human brain works, and under time pressures, clinicians rarely think in conditional probabilities when making decisions. To move forward, it is necessary to accurately measure diagnostic errors (which could come from mapping out the diagnostic process as the medication process has done and measuring errors at each step) and pilot test interventions such as these checklists to determine whether they work.

  2. Errors in Aviation Decision Making: Bad Decisions or Bad Luck?

    NASA Technical Reports Server (NTRS)

    Orasanu, Judith; Martin, Lynne; Davison, Jeannie; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    Despite efforts to design systems and procedures to support 'correct' and safe operations in aviation, errors in human judgment still occur and contribute to accidents. In this paper we examine how an NDM (naturalistic decision making) approach might help us to understand the role of decision processes in negative outcomes. Our strategy was to examine a collection of identified decision errors through the lens of an aviation decision process model and to search for common patterns. The second, and more difficult, task was to determine what might account for those patterns. The corpus we analyzed consisted of tactical decision errors identified by the NTSB (National Transportation Safety Board) from a set of accidents in which crew behavior contributed to the accident. A common pattern emerged: about three quarters of the errors represented plan-continuation errors, that is, a decision to continue with the original plan despite cues that suggested changing the course of action. Features in the context that might contribute to these errors were identified: (a) ambiguous dynamic conditions and (b) organizational and socially-induced goal conflicts. We hypothesize that 'errors' are mediated by underestimation of risk and failure to analyze the potential consequences of continuing with the initial plan. Stressors may further contribute to these effects. Suggestions for improving performance in these error-inducing contexts are discussed.

  3. A Quantum Theoretical Explanation for Probability Judgment Errors

    ERIC Educational Resources Information Center

    Busemeyer, Jerome R.; Pothos, Emmanuel M.; Franco, Riccardo; Trueblood, Jennifer S.

    2011-01-01

    A quantum probability model is introduced and used to explain human probability judgment errors including the conjunction and disjunction fallacies, averaging effects, unpacking effects, and order effects on inference. On the one hand, quantum theory is similar to other categorization and memory models of cognition in that it relies on vector…

  4. Using SAS PROC CALIS to fit Level-1 error covariance structures of latent growth models.

    PubMed

    Ding, Cherng G; Jane, Ten-Der

    2012-09-01

    In the present article, we demonstrate the use of SAS PROC CALIS to fit various types of Level-1 error covariance structures of latent growth models (LGM). Advantages of the SEM approach, on which PROC CALIS is based, include the capability to model change over time for latent constructs measured by multiple indicators, to embed the LGM into a larger latent variable model, to incorporate measurement models for latent predictors, and to better assess model fit, together with flexibility in specifying error covariance structures. The strength of PROC CALIS comes with technical coding work, which needs to be specifically addressed. We provide a tutorial on the SAS syntax for modeling the growth of a manifest variable and the growth of a latent construct, focusing the documentation on the specification of Level-1 error covariance structures. Illustrations are conducted with data generated from two given latent growth models. The coding provided is helpful when the growth model has been well determined and the Level-1 error covariance structure is to be identified.
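
    The abstract concerns SAS syntax, but the structure it targets can be sketched outside SAS as well. The snippet below is a Python sketch under the assumption of a first-order autoregressive Level-1 structure; the AR(1) choice, parameter values, and function name are illustrative and not taken from the article.

        import numpy as np

        def ar1_level1_cov(n_waves, sigma2, rho):
            # Level-1 (within-person) AR(1) error covariance:
            # Cov(e_t, e_s) = sigma2 * rho**|t - s|
            lags = np.abs(np.subtract.outer(np.arange(n_waves), np.arange(n_waves)))
            return sigma2 * rho ** lags

        # Example: 4 measurement occasions, error variance 2.0, autocorrelation 0.5
        print(ar1_level1_cov(4, sigma2=2.0, rho=0.5))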

  5. Evidence Report: Risk of Performance Errors Due to Training Deficiencies

    NASA Technical Reports Server (NTRS)

    Barshi, Immanuel

    2012-01-01

    The Risk of Performance Errors Due to Training Deficiencies is identified by the National Aeronautics and Space Administration (NASA) Human Research Program (HRP) as a recognized risk to human health and performance in space. The HRP Program Requirements Document (PRD) defines these risks. This Evidence Report provides a summary of the evidence that has been used to identify and characterize this risk. Given that training content, timing, intervals, and delivery methods must support crew task performance, and given that training paradigms will be different for long-duration missions with increased crew autonomy, there is a risk that operators will lack the skills or knowledge necessary to complete critical tasks, resulting in flight and ground crew errors and inefficiencies, failed mission and program objectives, and an increase in crew injuries.

  6. Exploring Situational Awareness in Diagnostic Errors in Primary Care

    PubMed Central

    Singh, Hardeep; Giardina, Traber Davis; Petersen, Laura A.; Smith, Michael; Wilson, Lindsey; Dismukes, Key; Bhagwath, Gayathri; Thomas, Eric J.

    2013-01-01

    Objective Diagnostic errors in primary care are harmful but poorly studied. To facilitate understanding of diagnostic errors in real-world primary care settings using electronic health records (EHRs), this study explored the use of the Situational Awareness (SA) framework from aviation human factors research. Methods A mixed-methods study was conducted involving reviews of EHR data followed by semi-structured interviews of selected providers from two institutions in the US. The study population included 380 consecutive patients with colorectal and lung cancers diagnosed between February 2008 and January 2009. Using a pre-tested data collection instrument, trained physicians identified diagnostic errors, defined as lack of timely action on one or more established indications for diagnostic work-up for lung and colorectal cancers. Twenty-six providers involved in cases with and without errors were interviewed. Interviews probed for providers' lack of SA and how this may have influenced the diagnostic process. Results Of 254 cases meeting inclusion criteria, errors were found in 30 (32.6%) of 92 lung cancer cases and 56 (33.5%) of 167 colorectal cancer cases. Analysis of interviews related to error cases revealed evidence of lack of one of four levels of SA applicable to primary care practice: information perception, information comprehension, forecasting future events, and choosing appropriate action based on the first three levels. In cases without error, the application of the SA framework provided insight into processes involved in attention management. Conclusions A framework of SA can help analyze and understand diagnostic errors in primary care settings that use EHRs. PMID:21890757

  7. Decay of motor memories in the absence of error

    PubMed Central

    Vaswani, Pavan A.; Shadmehr, Reza

    2013-01-01

    When motor commands are accompanied by an unexpected outcome, the resulting error induces changes in subsequent commands. However, when errors are artificially eliminated, changes in motor commands are not sustained, but show decay. Why does the adaptation-induced change in motor output decay in the absence of error? A prominent idea is that decay reflects the stability of the memory. We show results that challenge this idea and instead suggest that motor output decays because the brain actively disengages a component of the memory. Humans adapted their reaching movements to a perturbation and were then introduced to a long period of trials in which errors were absent (error-clamp). We found that, in some subjects, motor output did not decay at the onset of the error-clamp block, but a few trials later. We manipulated the kinematics of movements in the error-clamp block and found that as movements became more similar to subjects’ natural movements in the perturbation block, the lag to decay onset became longer and eventually reached hundreds of trials. Furthermore, when there was decay in the motor output, the endpoint of decay was not zero, but a fraction of the motor memory that was last acquired. Therefore, adaptation to a perturbation installed two distinct kinds of memories: one that was disengaged when the brain detected a change in the task, and one that persisted despite it. Motor memories showed little decay in the absence of error if the brain was prevented from detecting a change in task conditions. PMID:23637163

  8. Human Subthalamic Nucleus in Movement Error Detection and Its Evaluation during Visuomotor Adaptation

    PubMed Central

    Zavala, Baltazar; Pogosyan, Alek; Ashkan, Keyoumars; Zrinzo, Ludvic; Foltynie, Thomas; Limousin, Patricia; Brown, Peter

    2014-01-01

    Monitoring and evaluating movement errors to guide subsequent movements is a critical feature of normal motor control. Previously, we showed that the postmovement increase in electroencephalographic (EEG) beta power over the sensorimotor cortex reflects neural processes that evaluate motor errors consistent with Bayesian inference (Tan et al., 2014). Whether such neural processes are limited to this cortical region or involve the basal ganglia is unclear. Here, we recorded EEG over the cortex and local field potential (LFP) activity in the subthalamic nucleus (STN) from electrodes implanted in patients with Parkinson's disease, while they moved a joystick-controlled cursor to visual targets displayed on a computer screen. After movement offsets, we found increased beta activity in both local STN LFP and sensorimotor cortical EEG and in the coupling between the two, which was affected by both error magnitude and its contextual saliency. The postmovement increase in the coupling between STN and cortex was dominated by information flow from sensorimotor cortex to STN. However, an information drive appeared from STN to sensorimotor cortex in the first phase of the adaptation, when a constant rotation was applied between joystick inputs and cursor outputs. The strength of the STN to cortex drive correlated with the degree of adaption achieved across subjects. These results suggest that oscillatory activity in the beta band may dynamically couple the sensorimotor cortex and basal ganglia after movements. In particular, beta activity driven from the STN to cortex indicates task-relevant movement errors, information that may be important in modifying subsequent motor responses. PMID:25505327

  9. Impact of an antiretroviral stewardship strategy on medication error rates.

    PubMed

    Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E

    2018-05-02

    The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate (p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error (p < 0.001) and those with 2 or more errors (p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention (p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  10. Geometric error characterization and error budgets. [thematic mapper

    NASA Technical Reports Server (NTRS)

    Beyer, E.

    1982-01-01

    Procedures used in characterizing geometric error sources for a spaceborne imaging system are described using the LANDSAT D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross-track and along-track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad) at the 90% level.
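
    The abstract tabulates cross-track and along-track error contributions for the instrument, spacecraft, and ground processing. One conventional way to roll independent contributions into a single budget figure is a root-sum-square combination; the sketch below illustrates that convention only, and the numbers and component names are placeholders rather than LANDSAT-D values.

        import math

        # Hypothetical 90% error contributions for one axis, in pixels (placeholders)
        cross_track = {"thematic_mapper": 0.20, "spacecraft": 0.30, "ground_processing": 0.25}

        def rss_budget(contributions):
            # Root-sum-square combination of independent error contributions
            return math.sqrt(sum(v ** 2 for v in contributions.values()))

        print(f"Cross-track budget: {rss_budget(cross_track):.2f} pixels (90%)")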

  11. Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method

    DOE PAGES

    Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan

    2015-07-29

    Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method was based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical for human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. Then, the data collection framework and process is described and how the collected data were used to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. Furthermore, these challenges reflect the data needs specific to IDHEAS. More importantly, they also represent the general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.
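
    As a purely hypothetical illustration of the decision-tree quantification idea described above, the sketch below maps two context factors to a human error probability for one crew failure mode; the branch factors and probability values are invented for illustration and are not IDHEAS values.

        def hep_for_cfm(time_pressure_high, indications_clear):
            # Toy decision tree for one crew failure mode (CFM).
            # Branch structure and HEP values are illustrative placeholders.
            if time_pressure_high:
                return 3e-2 if indications_clear else 1e-1
            return 1e-3 if indications_clear else 1e-2

        print(hep_for_cfm(time_pressure_high=True, indications_clear=False))  # 0.1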

  12. A comparison of human cadaver and augmented reality simulator models for straight laparoscopic colorectal skills acquisition training.

    PubMed

    LeBlanc, Fabien; Champagne, Bradley J; Augestad, Knut M; Neary, Paul C; Senagore, Anthony J; Ellis, Clyde N; Delaney, Conor P

    2010-08-01

    The aim of this study was to compare the human cadaver model with an augmented reality simulator for straight laparoscopic colorectal skills acquisition. Thirty-five sigmoid colectomies were performed on a cadaver (n = 7) or an augmented reality simulator (n = 28) during a laparoscopic training course. Prior laparoscopic colorectal experience was assessed. Objective structured technical skills assessment forms were completed by trainers and trainees independently. Groups were compared according to technical skills and events scores and satisfaction with training model. Prior laparoscopic experience was similar in both groups. For trainers and trainees, technical skills scores were considerably better on the simulator than on the cadaver. For trainers, generic events score was also considerably better on the simulator than on the cadaver. The main generic event occurring on both models was errors in the use of retraction. The main specific event occurring on both models was bowel perforation. Global satisfaction was better for the cadaver than for the simulator model (p < 0.001). The human cadaver model was more difficult but better appreciated than the simulator for laparoscopic sigmoid colectomy training. Simulator training followed by cadaver training can appropriately integrate simulators into the learning curve and maintain the benefits of both training methodologies. Published by Elsevier Inc.

  13. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
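
    The report combines the position estimate error term and the flight technical error term into a total navigation error per axis. A common convention for independent error terms, assumed here for illustration only (the report's exact combination rule is not stated in the abstract), is to add the means and root-sum-square the standard deviations.

        import math

        def combine_axis_error(mean_nse, sd_nse, mean_fte, sd_fte):
            # Combine navigation-system-error and flight-technical-error terms for one
            # axis, assuming independence: means add, standard deviations combine by RSS.
            return mean_nse + mean_fte, math.sqrt(sd_nse ** 2 + sd_fte ** 2)

        # Horizontal axis: position-estimate figures quoted in the abstract plus a
        # placeholder flight technical error term.
        total_mean, total_sd = combine_axis_error(mean_nse=390.0, sd_nse=150.0,
                                                  mean_fte=0.0, sd_fte=100.0)
        print(total_mean, round(total_sd, 1))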

  14. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    ERIC Educational Resources Information Center

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  15. Errors in otology.

    PubMed

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  16. AgRISTARS. Preliminary technical results review of FY81 experiments, volume 2: Fiscal year 1981/1982 "corn and soybeans pilot" experiment

    NASA Technical Reports Server (NTRS)

    1981-01-01

    The performance of the technology exhibited significant proportion estimation errors, specifically high mean errors in both corn and soybean area estimation. The data systems, technical approaches, and data assessment of the pilot experiment were reviewed. Results of proportion estimation procedure performance evaluations and sensitivity evaluations are presented. The role of the pilot experiment in foreign technology development is discussed.

  17. 45 CFR 60.6 - Reporting errors, omissions, and revisions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Section 60.6 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION NATIONAL PRACTITIONER DATA BANK FOR ADVERSE INFORMATION ON PHYSICIANS AND OTHER HEALTH CARE PRACTITIONERS Reporting of Information § 60.6 Reporting errors, omissions, and revisions. (a) Persons and entities are responsible for...

  18. Visuomotor adaptation needs a validation of prediction error by feedback error

    PubMed Central

    Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle

    2014-01-01

    The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the “terminal feedback error” condition, viewing of the hand was allowed only at movement end, simultaneously with viewing of the target. In the “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As long as subjects remained unaware of the optical deviation and self-assigned their pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly

  19. The Digital Humanities as a Humanities Project

    ERIC Educational Resources Information Center

    Svensson, Patrik

    2012-01-01

    This article argues that the digital humanities can be seen as a humanities project in a time of significant change in the academy. The background is a number of scholarly, educational and technical challenges, the multiple epistemic traditions linked to the digital humanities, the potential reach of the field across and outside the humanities,…

  20. Catching errors with patient-specific pretreatment machine log file analysis.

    PubMed

    Rangaraj, Dharanipathy; Zhu, Mingyao; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar H; Brame, Scott; Mutic, Sasa

    2013-01-01

    A robust, efficient, and reliable quality assurance (QA) process is highly desired for modern external beam radiation therapy treatments. Here, we report the results of a semiautomatic, pretreatment, patient-specific QA process based on dynamic machine log file analysis clinically implemented for intensity modulated radiation therapy (IMRT) treatments delivered by high energy linear accelerators (Varian 2100/2300 EX, Trilogy, iX-D, Varian Medical Systems Inc, Palo Alto, CA). The multileaf collimator machine (MLC) log files are called Dynalog by Varian. Using an in-house developed computer program called "Dynalog QA," we automatically compare the beam delivery parameters in the log files that are generated during pretreatment point dose verification measurements, with the treatment plan to determine any discrepancies in IMRT deliveries. Fluence maps are constructed and compared between the delivered and planned beams. Since clinical introduction in June 2009, 912 machine log file analyses QA were performed by the end of 2010. Among these, 14 errors causing dosimetric deviation were detected and required further investigation and intervention. These errors were the result of human operating mistakes, flawed treatment planning, and data modification during plan file transfer. Minor errors were also reported in 174 other log file analyses, some of which stemmed from false positives and unreliable results; the origins of these are discussed herein. It has been demonstrated that the machine log file analysis is a robust, efficient, and reliable QA process capable of detecting errors originating from human mistakes, flawed planning, and data transfer problems. The possibility of detecting these errors is low using point and planar dosimetric measurements. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
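
    The core of such a log-file QA check is a comparison of delivered beam parameters against the plan. The sketch below illustrates that idea for leaf positions only; the data layout, tolerance value, and function name are assumptions for illustration and do not reproduce the Dynalog format or the authors' software.

        def check_leaf_positions(planned, delivered, tolerance_mm=1.0):
            # Flag MLC leaves whose delivered position deviates from the plan by more
            # than the tolerance. Inputs are {leaf_id: position_mm} dicts (a made-up
            # layout; real Dynalog records carry much more information).
            discrepancies = {}
            for leaf_id, plan_pos in planned.items():
                delta = abs(delivered.get(leaf_id, float("nan")) - plan_pos)
                if not delta <= tolerance_mm:   # also catches missing leaves (NaN)
                    discrepancies[leaf_id] = delta
            return discrepancies

        planned = {1: 12.0, 2: 15.5, 3: -3.2}
        delivered = {1: 12.1, 2: 17.2, 3: -3.3}
        print(check_leaf_positions(planned, delivered))  # leaf 2 is out of tolerance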

  1. Conclusions, Research Needs, and Recommendations of the Expert Panel: Technical Workshop on Human Milk Surveillance and Research for Environmental Chemicals in the U.S.

    EPA Science Inventory

    Proceedings of "The Technical Workshop on Human Milk Surveillance and Research on Environmental Chemicals in the United States" was organized to develop state-of-the-science protocols describing the various aspects of such a program. The 2-day workshop was held at the Mi...

  2. Assessing Nurse Anaesthetists' Non-Technical Skills in the operating room.

    PubMed

    Lyk-Jensen, H T; Jepsen, R M H G; Spanager, L; Dieckmann, P; Østergaard, D

    2014-08-01

    Incident reporting and fieldwork in operating rooms have shown that some of the errors that arise in anaesthesia relate to inadequate use of non-technical skills. To provide a tool for training and feedback on nurse anaesthetists' non-technical skills, this study aimed to adapt the Anaesthetists' Non-Technical Skills (ANTS) as a behavioural marker system for the formative assessment of nurse anaesthetists' non-technical skills in the operating room. A qualitative approach with focus group interviews was used to identify the non-technical skills of nurse anaesthetists in the operating room. The interview data were transcribed verbatim. Directed content analysis was used to code and sort data deductively into the ANTS categories: task management, team working, situation awareness and decision making. The prototype named Nurse Anaesthetists' Non-Technical Skills (N-ANTS) was presented and discussed in a group of subject matter experts to ensure face validity. The N-ANTS system consists of the same four categories as ANTS and 15 underlying elements. Three to five good and poor behavioural markers for each element were identified. The headings and definitions of the categories and elements were adjusted to encompass the behavioural markers in N-ANTS. The differences that emerged mainly reflected statements regarding the establishment of role, competence, and task delegation. A behavioural marker system, N-ANTS, for nurse anaesthetists was adapted from a behavioural marker system, ANTS, for anaesthesiologists. © 2014 The Acta Anaesthesiologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  3. The introduction of an acute physiological support service for surgical patients is an effective error reduction strategy.

    PubMed

    Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C

    2013-01-01

    Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence of error and the impact of error on these patients. A number of error taxonomies were used to understand the causes of human error and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males. There were 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1), and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8), and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped understand the common sources of error in the general surgical wards and will inform

  4. Technical note: 3D from standard digital photography of human crania-a preliminary assessment.

    PubMed

    Katz, David; Friess, Martin

    2014-05-01

    This study assessed three-dimensional (3D) photogrammetry as a tool for capturing and quantifying human skull morphology. While virtual reconstruction with 3D surface scanning technology has become an accepted part of the paleoanthropologist's tool kit, recent advances in 3D photogrammetry make it a potential alternative to dedicated surface scanners. The principal advantages of photogrammetry are more rapid raw data collection, simplicity and portability of setup, and reduced equipment costs. We tested the precision and repeatability of 3D photogrammetry by comparing digital models of human crania reconstructed from conventional, 2D digital photographs to those generated using a 3D surface scanner. Overall, the photogrammetry and scanner meshes showed low degrees of deviation from one another. Surface area estimates derived from photogrammetry models tended to be slightly larger. Landmark configurations generally did not cluster together based upon whether the reconstruction was created with photogrammetry or surface scanning technology. Average deviations of landmark coordinates recorded on photogrammetry models were within the generally allowable range of error in osteometry. Thus, while dependent upon the needs of the particular research project, 3D photogrammetry appears to be a suitable, lower-cost alternative to 3D imaging and scanning options. Copyright © 2014 Wiley Periodicals, Inc.

  5. The ethics and practical importance of defining, distinguishing and disclosing nursing errors: a discussion paper.

    PubMed

    Johnstone, Megan-Jane; Kanitsaki, Olga

    2006-03-01

    Nurses globally are required and expected to report nursing errors. As is clearly demonstrated in the international literature, fulfilling this requirement is not, however, without risks. In this discussion paper, the notion of 'nursing error', the practical and moral importance of defining, distinguishing and disclosing nursing errors and how a distinct definition of 'nursing error' fits with the new 'system approach' to human-error management in health care are critiqued. Drawing on international literature and two key case exemplars from the USA and Australia, arguments are advanced to support the view that although it is 'right' for nurses to report nursing errors, it will be very difficult for them to do so unless a non-punitive approach to nursing-error management is adopted.

  6. Error propagation in eigenimage filtering.

    PubMed

    Soltanian-Zadeh, H; Windham, J P; Jenkins, J M

    1990-01-01

    Mathematical derivation of error (noise) propagation in eigenimage filtering is presented. Based on the mathematical expressions, a method for decreasing the propagated noise given a sequence of images is suggested. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the final composite image are compared to the SNRs and CNRs of the images in the sequence. The consistency of the assumptions and accuracy of the mathematical expressions are investigated using sequences of simulated and real magnetic resonance (MR) images of an agarose phantom and a human brain.

  7. Sun compass error model

    NASA Technical Reports Server (NTRS)

    Blucker, T. J.; Ferry, W. W.

    1971-01-01

    An error model is described for the Apollo 15 sun compass, a contingency navigational device. Field test data are presented along with significant results of the test. The errors reported include a random error resulting from tilt in leveling the sun compass, a random error because of observer sighting inaccuracies, a bias error because of mean tilt in compass leveling, a bias error in the sun compass itself, and a bias error because the device is leveled to the local terrain slope.

  8. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Conclusions
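
    The error-impact calculation described above, the percentage difference between model versions with and without unintentional errors with a +/-5% cut-off for a material error, can be sketched as follows; the continuum-step names and numbers are made up for illustration.

        def material_errors(reference, candidate, threshold=0.05):
            # Percentage difference between two parallel model versions; differences
            # beyond +/-threshold are flagged as material (5% follows the abstract;
            # the output names are hypothetical care-continuum steps).
            flagged = {}
            for key, ref in reference.items():
                pct = (candidate[key] - ref) / ref
                if abs(pct) > threshold:
                    flagged[key] = pct
            return flagged

        reference = {"linked_to_care": 1200, "virally_suppressed": 800}
        candidate = {"linked_to_care": 1290, "virally_suppressed": 801}
        print(material_errors(reference, candidate))  # {'linked_to_care': 0.075}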

  9. Psychomotor performance measured in a virtual environment correlates with technical skills in the operating room.

    PubMed

    Kundhal, Pavi S; Grantcharov, Teodor P

    2009-03-01

    This study was conducted to validate the role of virtual reality computer simulation as an objective method for assessing laparoscopic technical skills. The authors aimed to investigate whether performance in the operating room, assessed using a modified Objective Structured Assessment of Technical Skill (OSATS), correlated with the performance parameters registered by a virtual reality laparoscopic trainer (LapSim). The study enrolled 10 surgical residents (3 females) with a median of 5.5 years (range, 2-6 years) since graduation who had similar limited experience in laparoscopic surgery (median, 5; range, 1-16 laparoscopic cholecystectomies). All the participants performed three repetitions of seven basic skills tasks on the LapSim laparoscopic trainer and one laparoscopic cholecystectomy in the operating room. The operating room procedure was video recorded and blindly assessed by two independent observers using a modified OSATS rating scale. Assessment in the operating room was based on three parameters: time used, error score, and economy of motion score. During the tasks on the LapSim, time, error parameters (tissue damage and millimeters of tissue damage [tasks 2-6]; incomplete target areas, badly placed clips, and dropped clips [task 7]), and economy of movement parameters (path length and angular path) were registered. The correlation between time, economy, and error parameters during the simulated tasks and the operating room procedure was statistically assessed using Spearman's test. Significant correlations were demonstrated between the time used to complete the operating room procedure and time used for task 7 (r(s) = 0.74; p = 0.015). The error score demonstrated during the laparoscopic cholecystectomy correlated well with the tissue damage in three of the seven tasks (p < 0.05), the millimeters of tissue damage during two of the tasks, and the error score in task 7 (r(s) = 0.67; p = 0.034). Furthermore, statistically significant correlations were
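
    The statistical step used in the study, Spearman's rank correlation between simulator metrics and operating-room performance, can be reproduced in a few lines; the paired scores below are made-up placeholder values, not the study's data.

        from scipy.stats import spearmanr

        # Made-up paired scores for ten trainees: time on simulator task 7 (seconds)
        # and time to complete the operating-room procedure (minutes).
        sim_task7_time = [95, 120, 80, 150, 110, 70, 130, 100, 140, 90]
        or_time_min    = [56, 62, 40, 75, 55, 38, 66, 52, 70, 44]

        rho, p = spearmanr(sim_task7_time, or_time_min)
        print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")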

  10. A technical framework to describe occupant behavior for building energy simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, William; Hong, Tianzhen

    2013-12-20

    Green buildings that fail to meet expected design performance criteria indicate that technology alone does not guarantee high performance. Human influences are quite often simplified and ignored in the design, construction, and operation of buildings. Energy-conscious human behavior has been demonstrated to be a significant positive factor for improving the indoor environment while reducing the energy use of buildings. In our study we developed a new technical framework to describe energy-related human behavior in buildings. The energy-related behavior includes accounting for individuals and groups of occupants and their interactions with building energy services systems, appliances and facilities. The technical framework consists of four key components: (i) the drivers behind energy-related occupant behavior, which are biological, societal, environmental, physical, and economical in nature; (ii) the needs of the occupants, based on satisfying criteria that are either physical (e.g. thermal, visual and acoustic comfort) or non-physical (e.g. entertainment, privacy, and social reward); (iii) the actions that building occupants perform when their needs are not fulfilled; and (iv) the systems with which an occupant can interact to satisfy their needs. The technical framework aims to provide a standardized description of a complete set of human energy-related behaviors in the form of an XML schema. For each type of behavior (e.g., occupants opening/closing windows, switching on/off lights etc.) we identify a set of common behaviors based on a literature review, survey data, and our own field study and analysis. Stochastic models are adopted or developed for each type of behavior to enable the evaluation of the impact of human behavior on energy use in buildings, during either the design or operation phase. We will also demonstrate the use of the technical framework in assessing the impact of occupancy behavior on energy saving technologies.
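
    As a sketch of what a stochastic model for one behaviour type in such a framework might look like (the logistic form, parameter values, and trigger variable are assumptions for illustration, not part of the framework described above):

        import math, random

        def window_open_probability(indoor_temp_c, t50=26.0, slope=1.5):
            # Logistic probability that an occupant opens a window at a given indoor
            # temperature; functional form and parameters are illustrative assumptions.
            return 1.0 / (1.0 + math.exp(-(indoor_temp_c - t50) / slope))

        random.seed(1)
        temp = 27.5
        opens = random.random() < window_open_probability(temp)
        print(f"P(open) at {temp} C = {window_open_probability(temp):.2f}; opens: {opens}")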

  11. Analysis of professional malpractice claims in implant dentistry in Italy from insurance company technical reports, 2006 to 2010.

    PubMed

    Pinchi, Vilma; Varvara, Giuseppe; Pradella, Francesco; Focardi, Martina; Donati, Michele D; Norelli, Gianaristide

    2014-01-01

    The aim of the study was to analyze the characteristics of implant dentistry claims in Italy based on insurance company technical reports for malpractice claims. One hundred twenty-one technical reports of cases of professional malpractice in implant dentistry between 2006 and 2010 were included in the study. Data included the sex and age of the patient and dentist, the kind of negligence claimed, and the damages awarded as a consequence of the alleged misconduct. Of the cases examined in this study, 9.9% went to court. The patients were female in 73.6% of the cases. Most of the technical errors were committed during implant insertion (82.6%). In 50.4% of cases, the technical error involved the surrounding structures, such as damage to the inferior alveolar nerve (32.2%) or the lingual nerve (2.5%), invasion of the maxillary sinus (9.1%), or pulpal dental necrosis in adjacent teeth (6.6%). Incomplete clinical documentation was apparent in 54.5% of cases. In 9.9% of cases, a civil suit had already been filed before a visit, and medicolegal advice from the insurance expert had been procured. The discrepancy between the total number of cases examined and those that went to court indicates that implant malpractice claims in Italy are most often settled out of court. The large number of intraoperative errors seen and the high proportion of injuries to surrounding structures suggest that implant dentists would benefit from further specific training. Also, clinical documentation vital to a defense against any claims relating to professional misconduct was incomplete or absent in more than half of the cases.

  12. De novo centriole formation in human cells is error-prone and does not require SAS-6 self-assembly.

    PubMed

    Wang, Won-Jing; Acehan, Devrim; Kao, Chien-Han; Jane, Wann-Neng; Uryu, Kunihiro; Tsou, Meng-Fu Bryan

    2015-11-26

    Vertebrate centrioles normally propagate through duplication, but in the absence of preexisting centrioles, de novo synthesis can occur. Consistently, centriole formation is thought to strictly rely on self-assembly, involving self-oligomerization of the centriolar protein SAS-6. Here, through reconstitution of de novo synthesis in human cells, we surprisingly found that normal looking centrioles capable of duplication and ciliation can arise in the absence of SAS-6 self-oligomerization. Moreover, whereas canonically duplicated centrioles always form correctly, de novo centrioles are prone to structural errors, even in the presence of SAS-6 self-oligomerization. These results indicate that centriole biogenesis does not strictly depend on SAS-6 self-assembly, and may require preexisting centrioles to ensure structural accuracy, fundamentally deviating from the current paradigm.

  13. Medical error and systems of signaling: conceptual and linguistic definition.

    PubMed

    Smorti, Andrea; Cappelli, Francesco; Zarantonello, Roberta; Tani, Franca; Gensini, Gian Franco

    2014-09-01

    In recent years the issue of patient safety has been the subject of detailed investigations, particularly as a result of increasing attention from patients and the public to the problem of medical error. The purpose of this work is firstly to define the classification of medical errors, which are distinguished from two perspectives: those that are personal and those that are caused by the system. Furthermore, we briefly review some of the main methods used by healthcare organizations to identify and analyze errors. This discussion makes clear that, in order to mount a practical, coordinated and shared response to error, it is necessary to promote an analysis that considers all the elements (human, technological and organizational) that contribute to the occurrence of a critical event. It is therefore essential to create a culture of constructive confrontation that encourages an open and non-punitive debate about the causes that led to the error. In conclusion, we underline that in health care it is essential to adopt a systems-level perspective that treats error as a source of learning and as a result of the interaction between the individual and the organization. In this way, one can encourage a blame-free discussion of evident errors, and of those that are not immediately identifiable, in order to create conditions under which errors are recognized and corrected even before they produce negative consequences.

  14. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping error is a common error in InSAR processing that can seriously degrade the accuracy of the monitoring results. Based on a gross-error detection method, quasi-accurate detection (QUAD), a method for automatic correction of unwrapping errors is established in this paper. This method identifies and corrects the unwrapping errors by establishing a functional model between the true errors and interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method on simulated data. Results show that both methods can effectively suppress the unwrapping error when the proportion of unwrapping errors is low, and that the two methods can complement each other when the proportion of unwrapping errors is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can correct phase unwrapping errors successfully in practical applications.

  15. 77 FR 1889 - Drivers of CMVs: Restricting the Use of Cellular Phones; Technical Amendment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-12

    ... DEPARTMENT OF TRANSPORTATION Federal Motor Carrier Safety Administration 49 CFR Part 391 [Docket No. FMCSA-2010-0096] RIN 2126-AB29 Drivers of CMVs: Restricting the Use of Cellular Phones; Technical... Cellular Phones final rule (76 FR 75470) had a clerical error in Sec. 391.15(f)(1) that stated ``paragraph...

  16. 45 CFR 60.6 - Reporting errors, omissions, and revisions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false Reporting errors, omissions, and revisions. 60.6 Section 60.6 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION NATIONAL PRACTITIONER DATA BANK FOR ADVERSE INFORMATION ON PHYSICIANS AND OTHER HEALTH CARE PRACTITIONERS Reporting of...

  17. Technical Issues in Evolving to Integrated Services Digital Network (ISDN)

    DTIC Science & Technology

    1991-06-01

    channel through some separate interface (such as the AT command set of the Hayes modems or the...errors experienced over standard modem provided connectivity. But, in this project connectivity has been established only over a single CO. Those...examined to some extent and are discussed below. Existing equipment was of two types: that which treats ISDN as just another leased line providing 56k or

  18. Threat and error management for anesthesiologists: a predictive risk taxonomy

    PubMed Central

    Ruskin, Keith J.; Stiegler, Marjorie P.; Park, Kellie; Guffey, Patrick; Kurup, Viji; Chidester, Thomas

    2015-01-01

    Purpose of review Patient care in the operating room is a dynamic interaction that requires cooperation among team members and reliance upon sophisticated technology. Most human factors research in medicine has been focused on analyzing errors and implementing system-wide changes to prevent them from recurring. We describe a set of techniques that has been used successfully by the aviation industry to analyze errors and adverse events and explain how these techniques can be applied to patient care. Recent findings Threat and error management (TEM) describes adverse events in terms of risks or challenges that are present in an operational environment (threats) and the actions of specific personnel that potentiate or exacerbate those threats (errors). TEM is a technique widely used in aviation, and can be adapted for the use in a medical setting to predict high-risk situations and prevent errors in the perioperative period. A threat taxonomy is a novel way of classifying and predicting the hazards that can occur in the operating room. TEM can be used to identify error-producing situations, analyze adverse events, and design training scenarios. Summary TEM offers a multifaceted strategy for identifying hazards, reducing errors, and training physicians. A threat taxonomy may improve analysis of critical events with subsequent development of specific interventions, and may also serve as a framework for training programs in risk mitigation. PMID:24113268

  19. Training situational awareness to reduce surgical errors in the operating room.

    PubMed

    Graafland, M; Schraagen, J M C; Boermeester, M A; Bemelman, W A; Schijven, M P

    2015-01-01

    Surgical errors result from faulty decision-making, misperceptions and the application of suboptimal problem-solving strategies, just as often as they result from technical failure. To date, surgical training curricula have focused mainly on the acquisition of technical skills. The aim of this review was to assess the validity of methods for improving situational awareness in the surgical theatre. A search was conducted in PubMed, Embase, the Cochrane Library and PsycINFO using predefined inclusion criteria, up to June 2014. All study types were considered eligible. The primary endpoint was validity for improving situational awareness in the surgical theatre at individual or team level. Nine articles were considered eligible. These evaluated surgical team crisis training in simulated environments for minimally invasive surgery (4) and open surgery (3), and training courses focused on non-technical skills (2). Two studies showed that simulation-based surgical team crisis training has construct validity for assessing situational awareness in surgical trainees in minimally invasive surgery. None of the studies showed effectiveness of surgical crisis training on situational awareness in open surgery, whereas one showed face validity of a 2-day non-technical skills training course. To improve safety in the operating theatre, more attention to situational awareness is needed in surgical training. Few structured curricula have been developed and validation research remains limited. Strategies to improve situational awareness can be adopted from other industries. © 2014 BJS Society Ltd. Published by John Wiley & Sons Ltd.

  20. 76 FR 4549 - Testing of Certain High Production Volume Chemicals; Second Group of Chemicals; Technical Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-26

    ... Testing of Certain High Production Volume Chemicals; Second Group of Chemicals; Technical Correction... production volume (HPV) chemical substances to obtain screening level data for health and environmental effects and chemical fate. This document is being issued to correct a typographical error concerning the...

  1. Evaluation of Investments in Science, Technology and Innovation: Applying Scientific and Technical Human Capital Framework for Assessment of Doctoral Students in Cooperative Research Centers

    ERIC Educational Resources Information Center

    Leonchuk, Olena

    2016-01-01

    This dissertation builds on an alternative framework for evaluation of science, technology and innovation (STI) outcomes--the scientific & technical (S&T) human capital which was developed by Bozeman, Dietz and Gaughan (2001). At its core, this framework looks beyond simple economic and publication metrics and instead focuses on…

  2. Effects of Human Factors in Engineering and Design for Teaching Mathematics: A Comparison Study of Online and Face-to-Face at a Technical College

    ERIC Educational Resources Information Center

    Mativo, John M.; Hill, Roger B.; Godfrey, Paul W.

    2013-01-01

    The focus of this study was to examine four characteristics for successful and unsuccessful students enrolled in basic mathematics courses at a technical college. The characteristics, considered to be in part effects of human factors in engineering and design, examined the preferred learning styles, computer information systems competency,…

  3. 45 CFR 164.312 - Technical safeguards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Technical safeguards. 164.312 Section 164.312 Public Welfare Department of Health and Human Services ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS SECURITY AND PRIVACY Security Standards for the Protection of Electronic Protected Health...

  4. Acquisition, representation, and transfer of models of visuo-motor error

    PubMed Central

    Zhang, Hang; Kulsa, Mila Kirstie C.; Maloney, Laurence T.

    2015-01-01

    We examined how human subjects acquire and represent models of visuo-motor error and how they transfer information about visuo-motor error from one task to a closely related one. The experiment consisted of three phases. In the training phase, subjects threw beanbags underhand towards targets displayed on a wall-mounted touch screen. The distribution of their endpoints was a vertically elongated bivariate Gaussian. In the subsequent choice phase, subjects repeatedly chose which of two targets varying in shape and size they would prefer to attempt to hit. Their choices allowed us to investigate their internal models of visuo-motor error distribution, including the coordinate system in which they represented visuo-motor error. In the transfer phase, subjects repeated the choice phase from a different vantage point, the same distance from the screen but with the throwing direction shifted 45°. From the new vantage point, visuo-motor error was effectively expanded horizontally by . We found that subjects incorrectly assumed an isotropic distribution in the choice phase but that the anisotropy they assumed in the transfer phase agreed with an objectively correct transfer. We also found that the coordinate system used in coding two-dimensional visuo-motor error in the choice phase was effectively one-dimensional. PMID:26057549

  5. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error of a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start with extracting image-based and deformation-based features, then apply feature pooling and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features such as the variation of the deformation vector field may require up to 20 registrations, which is a considerably time-consuming task. This paper proposes to use features extracted from a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shift and stereo confidence measures, it predicts the amount of registration error in millimetres densely using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphics Processing Unit (GPU) and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.
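
    A rough sketch of the regression step described above: features derived from a local search (here a per-voxel shift magnitude and a confidence value, both synthetic) are fed to a Random Forest that predicts registration error in millimetres. Feature choices, data, and parameters are placeholders, not the paper's actual pipeline.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n_voxels = 500
        shift_mag = rng.uniform(0, 5, n_voxels)    # local-search shift magnitude (voxels)
        confidence = rng.uniform(0, 1, n_voxels)   # stereo-style confidence measure
        X = np.column_stack([shift_mag, confidence])
        y_error_mm = 1.2 * shift_mag + rng.normal(0, 0.3, n_voxels)  # synthetic target

        model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_error_mm)
        print(model.predict([[3.0, 0.8]]))         # predicted registration error (mm)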

  6. Beyond human error taxonomies in assessment of risk in sociotechnical systems: a new paradigm with the EAST 'broken-links' approach.

    PubMed

    Stanton, Neville A; Harvey, Catherine

    2017-02-01

    Risk assessments in Sociotechnical Systems (STS) tend to be based on error taxonomies, yet the term 'human error' does not sit easily with STS theories and concepts. A new break-link approach was proposed as an alternative risk assessment paradigm to reveal the effect of information communication failures between agents and tasks on the entire STS. A case study of the training of a Royal Navy crew detecting a low flying Hawk (simulating a sea-skimming missile) is presented using EAST to model the Hawk-Frigate STS in terms of social, information and task networks. By breaking 19 social links and 12 task links, 137 potential risks were identified. Discoveries included revealing the effect of risk moving around the system; reducing the risks to the Hawk increased the risks to the Frigate. Future research should examine the effects of compounded information communication failures on STS performance. Practitioner Summary: The paper presents a step-by-step walk-through of EAST to show how it can be used for risk assessment in sociotechnical systems. The 'broken-links' method takes a systemic, rather than taxonomic, approach to identify information communication failures in social and task networks.

  7. Error Tracking System

    EPA Pesticide Factsheets

    Error Tracking System is a database used to store and track error notifications sent by users of EPA's web site. ETS is managed by OIC/OEI. OECA's ECHO and OEI Envirofacts use it. Error notifications submitted from EPA's home page under Contact Us also use it.

  8. Investigation of technology needs for avoiding helicopter pilot error related accidents

    NASA Technical Reports Server (NTRS)

    Chais, R. I.; Simpson, W. E.

    1985-01-01

    Pilot error, which is cited as a cause or related factor in most rotorcraft accidents, was examined. Pilot-error-related helicopter accidents were investigated to identify areas in which new technology could reduce or eliminate the underlying causes of these human errors. The aircraft accident data base at the U.S. Army Safety Center was studied as the source of data on helicopter accidents. A randomly selected sample of 110 aircraft records was analyzed on a case-by-case basis to assess the nature of the problems which need to be resolved and the applicable technology implications. Six technology areas in which there appears to be a need for new or increased emphasis are identified.

  9. Refractive Errors

    MedlinePlus

    ... and lens of your eye helps you focus. Refractive errors are vision problems that happen when the shape ... cornea, or aging of the lens. Four common refractive errors are Myopia, or nearsightedness - clear vision close up ...

  10. Error detection method

    DOEpatents

    Olson, Eric J.

    2013-06-11

    An apparatus, program product, and method that run an algorithm on a hardware based processor, generate a hardware error as a result of running the algorithm, generate an algorithm output for the algorithm, compare the algorithm output to another output for the algorithm, and detect the hardware error from the comparison. The algorithm is designed to cause the hardware based processor to heat to a degree that increases the likelihood of hardware errors to manifest, and the hardware error is observable in the algorithm output. As such, electronic components may be sufficiently heated and/or sufficiently stressed to create better conditions for generating hardware errors, and the output of the algorithm may be compared at the end of the run to detect a hardware error that occurred anywhere during the run that may otherwise not be detected by traditional methodologies (e.g., due to cooling, insufficient heat and/or stress, etc.).
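
    A minimal sketch of the underlying idea, not the patented method: run a deterministic, compute-intensive workload that heats and stresses the processor, then compare its output with a reference result (in practice obtained once on known-good hardware); any mismatch indicates a hardware error somewhere during the run.

    ```python
    # Sketch: deterministic stress workload whose output is compared against a reference.
    import hashlib

    def stress_workload(iterations: int) -> str:
        """CPU-heavy, deterministic loop; the result is bit-identical on healthy hardware."""
        digest = b"seed"
        for _ in range(iterations):
            digest = hashlib.sha256(digest).digest()
        return digest.hex()

    # In practice this reference would be computed once on known-good hardware.
    REFERENCE = stress_workload(1_000_000)

    def detect_hardware_error() -> bool:
        """Re-run the workload and flag any deviation from the reference output."""
        return stress_workload(1_000_000) != REFERENCE

    print("hardware error detected" if detect_hardware_error() else "outputs match")
    ```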

  11. Aircraft system modeling error and control error

    NASA Technical Reports Server (NTRS)

    Kulkarni, Nilesh V. (Inventor); Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor); Burken, John J. (Inventor)

    2012-01-01

    A method for modeling error-driven adaptive control of an aircraft. Normal aircraft plant dynamics is modeled, using an original plant description in which a controller responds to a tracking error e(k) to drive the component to a normal reference value according to an asymptote curve. Where the system senses that (1) at least one aircraft plant component is experiencing an excursion and (2) the return of this component value toward its reference value is not proceeding according to the expected controller characteristics, neural network (NN) modeling of aircraft plant operation may be changed. However, if (1) is satisfied but the error component is returning toward its reference value according to expected controller characteristics, the NN will continue to model operation of the aircraft plant according to an original description.
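
    A schematic sketch of the decision logic described above, under assumed exponential-decay controller characteristics and arbitrary thresholds; it is not the patented controller, only an illustration of when the neural-network plant model would be updated versus left unchanged.

    ```python
    # Sketch: update the NN plant model only if the component is in excursion AND its
    # tracking error is not decaying along the expected controller asymptote.
    def expected_error(e0: float, k: int, decay: float = 0.9) -> float:
        """Assumed error magnitude after k steps for a well-behaved controller."""
        return abs(e0) * decay**k

    def should_update_nn_model(e0: float, e_k: float, k: int,
                               excursion_threshold: float = 0.5,
                               tolerance: float = 1.5) -> bool:
        in_excursion = abs(e_k) > excursion_threshold
        decaying_as_expected = abs(e_k) <= tolerance * expected_error(e0, k)
        return in_excursion and not decaying_as_expected

    # Example: error started at 2.0; after 10 steps it is still 1.2 -> update the NN model.
    print(should_update_nn_model(e0=2.0, e_k=1.2, k=10))
    ```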

  12. 42 CFR 407.32 - Prejudice to enrollment rights because of Federal Government misrepresentation, inaction, or error.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Government misrepresentation, inaction, or error. 407.32 Section 407.32 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM SUPPLEMENTARY MEDICAL INSURANCE... enrollment rights because of Federal Government misrepresentation, inaction, or error. If an individual's...

  13. 42 CFR 407.32 - Prejudice to enrollment rights because of Federal Government misrepresentation, inaction, or error.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Government misrepresentation, inaction, or error. 407.32 Section 407.32 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM SUPPLEMENTARY MEDICAL INSURANCE... enrollment rights because of Federal Government misrepresentation, inaction, or error. If an individual's...

  14. Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.

    PubMed

    Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J

    2016-10-24

    In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
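
    One common way to formalise this two-mechanism account is a dual-rate state-space adaptation model with an added, non-learning error-correction term; the sketch below uses that standard form with illustrative parameters that are not fitted to the study's data.

    ```python
    # Dual-rate adaptation driven by proprioceptive prediction error, plus a visual
    # correction term that improves performance but does not itself learn.
    import numpy as np

    A_slow, B_slow = 0.996, 0.02   # slow process: retains well, learns slowly
    A_fast, B_fast = 0.90, 0.20    # fast process: forgets quickly, learns fast

    def simulate(n_trials=200, perturbation=1.0, visual_feedback=True):
        x_slow = x_fast = 0.0
        output = []
        for _ in range(n_trials):
            adapted = x_slow + x_fast                        # state of the learning processes
            error = perturbation - adapted                   # sensory prediction error
            correction = error if visual_feedback else 0.0   # visual correction, no learning
            output.append(adapted + correction)              # observed walking pattern
            x_slow = A_slow * x_slow + B_slow * error        # learning uses the prediction error only
            x_fast = A_fast * x_fast + B_fast * error
        return np.array(output)

    # Performance looks better with visual feedback, but the learned states evolve identically.
    print(simulate(visual_feedback=True)[:5].round(2))
    print(simulate(visual_feedback=False)[:5].round(2))
    ```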

  15. Effects of Listening Conditions, Error Types, and Ensemble Textures on Error Detection Skills

    ERIC Educational Resources Information Center

    Waggoner, Dori T.

    2011-01-01

    This study was designed with three main purposes: (a) to investigate the effects of two listening conditions on error detection accuracy, (b) to compare error detection responses for rhythm errors and pitch errors, and (c) to examine the influences of texture on error detection accuracy. Undergraduate music education students (N = 18) listened to…

  16. Solutions to decrease a systematic error related to AAPH addition in the fluorescence-based ORAC assay.

    PubMed

    Mellado-Ortega, Elena; Zabalgogeazcoa, Iñigo; Vázquez de Aldana, Beatriz R; Arellano, Juan B

    2017-02-15

    Oxygen radical absorbance capacity (ORAC) assay in 96-well multi-detection plate readers is a rapid method to determine total antioxidant capacity (TAC) in biological samples. A disadvantage of this method is that the antioxidant inhibition reaction does not start in all of the 96 wells at the same time due to technical limitations when dispensing the free radical-generating azo initiator 2,2'-azobis (2-methyl-propanimidamide) dihydrochloride (AAPH). The time delay between wells yields a systematic error that causes statistically significant differences in TAC determination of antioxidant solutions depending on their plate position. We propose two alternative solutions to avoid this AAPH-dependent error in ORAC assays. Copyright © 2016 Elsevier Inc. All rights reserved.
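
    The record does not describe the proposed solutions in detail; as a generic illustration of how a per-well dispensing delay can be handled, the sketch below re-references each well's kinetic curve to its own (hypothetical) AAPH addition time before computing a crude area under the curve. All numbers are placeholders.

    ```python
    # Generic illustration (not the paper's method): align each well's time axis to its
    # own reagent-addition time so that wells dispensed later are not penalised.
    import numpy as np

    read_interval_min = 1.0
    read_times = np.arange(0, 60, read_interval_min)   # minutes from the first plate read
    dispense_delay_s = 0.8                             # hypothetical per-well dispensing interval

    def net_auc(fluorescence, well_index):
        """Crude AUC after re-referencing time to this well's own AAPH addition."""
        t = read_times - well_index * dispense_delay_s / 60.0
        valid = t >= 0                                 # drop reads taken before AAPH reached this well
        return float(np.sum(fluorescence[valid]) * read_interval_min)

    curve = np.exp(-read_times / 20.0)                 # synthetic decay curve
    print(net_auc(curve, well_index=0), net_auc(curve, well_index=95))
    ```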

  17. Fabricating CAD/CAM Implant-Retained Mandibular Bar Overdentures: A Clinical and Technical Overview.

    PubMed

    Goo, Chui Ling; Tan, Keson Beng Choon

    2017-01-01

    This report describes the clinical and technical aspects of the oral rehabilitation of an edentulous patient with a knife-edge ridge in the mandibular anterior edentulous region, using implant-retained overdentures. The application of computer-aided design and computer-aided manufacturing (CAD/CAM) in the fabrication of the overdenture framework simplifies the laboratory process for the implant prostheses. The Nobel Procera CAD/CAM System was utilised to produce a lightweight titanium overdenture bar with locator attachments. It is proposed that the digital workflow of a CAD/CAM-milled implant overdenture bar avoids numerous technical steps and the possibility of casting errors involved in the conventional casting of such bars.

  18. Software error detection

    NASA Technical Reports Server (NTRS)

    Buechler, W.; Tucker, A. G.

    1981-01-01

    Several methods were employed to detect both the occurrence and the source of errors in the operational software of the AN/SLQ-32, a large embedded real-time electronic warfare command and control system running on the ROLM 1606 computer. The ROLM computer provides information about invalid addressing, improper use of privileged instructions, stack overflows, and unimplemented instructions. Additionally, software techniques were developed to detect invalid jumps, indices out of range, infinite loops, stack underflows, and field size errors. Finally, data are saved to provide information about the status of the system when an error is detected. This information includes I/O buffers, interrupt counts, stack contents, and recently passed locations. The various errors detected, techniques to assist in debugging problems, and segment simulation on a nontarget computer are discussed. These error detection techniques were a major factor in the successful identification of the primary cause of error in 98% of over 500 system dumps.

  19. Role of medical, technical, and administrative leadership in the human resource management life cycle: a team approach to laboratory management.

    PubMed

    Wilkinson, D S; Dilts, T J

    1999-01-01

    We believe the team approach to laboratory management achieves the best outcomes. Laboratory management requires the integration of medical, technical, and administrative expertise to achieve optimal service, quality, and cost performance. Usually, a management team of two or more individuals must be assembled to achieve all of these critical leadership functions. The individual members of the management team must possess the requisite expertise in clinical medicine, laboratory science, technology management, and administration. They also must work together in a unified and collaborative manner, regardless of where individual team members appear on the organizational chart. The management team members share in executing the entire human resource management life cycle, creating the proper environment to maximize human performance. Above all, the management team provides visionary and credible leadership.

  20. Relation between minimum-error discrimination and optimum unambiguous discrimination

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu Daowen; SQIG-Instituto de Telecomunicacoes, Departamento de Matematica, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Avenida Rovisco Pais PT-1049-001, Lisbon; Li Lvjun

    2010-09-15

    In this paper, we investigate the relationship between the minimum-error probability Q_E of ambiguous discrimination and the optimal inconclusive probability Q_U of unambiguous discrimination. It is known that for discriminating two states, the inequality Q_U ≥ 2Q_E has been proved in the literature. The main technical results are as follows: (1) We show that, for discriminating more than two states, Q_U ≥ 2Q_E may no longer hold, but the infimum of Q_U/Q_E is 1 and there is no supremum of Q_U/Q_E, which implies that the failure probabilities of the two schemes for discriminating some states may be narrowly or widely gapped. (2) We derive two concrete formulas for the minimum-error probability Q_E and the optimal inconclusive probability Q_U, respectively, for ambiguous and unambiguous discrimination among arbitrary m simultaneously diagonalizable mixed quantum states with given prior probabilities. In addition, we show that Q_E and Q_U satisfy the relationship Q_U ≥ (m/(m-1))Q_E.

  1. A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.

    PubMed

    Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent

    2007-07-20

    Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.

  2. Residents' numeric inputting error in computerized physician order entry prescription.

    PubMed

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human computer interaction (HCI), produce different error rates and types, but this has received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, which poses a great risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. It is recommended that inputting be done with the numeric keypad, which had lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from spatial

  3. Measurement Error and Equating Error in Power Analysis

    ERIC Educational Resources Information Center

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
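
    One standard way measurement error enters such calculations is through attenuation of the observed effect: the classical relation below (shown as a reminder, not as the paper's specific model) scales the true correlation by the square roots of the two measures' reliabilities, so unreliability shrinks the detectable effect and hence the power at a fixed sample size.

    ```latex
    % Classical attenuation relation (illustrative, not the paper's specific model):
    % rho_xx' and rho_yy' are the reliabilities of the two measures.
    \[
      \rho_{\text{observed}} = \rho_{\text{true}} \sqrt{\rho_{xx'}\,\rho_{yy'}}
    \]
    ```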

  4. Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.

    PubMed

    Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean

    2017-06-01

    Background There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were due to lack of supervision or of knowledge. Conclusions Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.

  5. Geographically correlated orbit error

    NASA Technical Reports Server (NTRS)

    Rosborough, G. W.

    1989-01-01

    The dominant error source in estimating the orbital position of a satellite from ground-based tracking data is the modeling of the Earth's gravity field. The resulting orbit error due to gravity field model errors is predominantly long wavelength in nature. This results in an orbit error signature that is strongly correlated over distances on the size of ocean basins. Anderle and Hoskin (1977) have shown that the orbit error along a given ground track is also correlated to some degree with the orbit error along adjacent ground tracks. This cross-track correlation is verified here and is found to be significant out to nearly 1000 kilometers in the case of TOPEX/POSEIDON when using the GEM-T1 gravity model. Finally, it was determined that even the orbit error at points where ascending and descending ground traces cross is somewhat correlated. The implication of these various correlations is that the orbit error due to gravity error is geographically correlated. Such correlations have direct implications when using altimetry to recover oceanographic signals.

  6. Cognitive control of conscious error awareness: error awareness and error positivity (Pe) amplitude in moderate-to-severe traumatic brain injury (TBI)

    PubMed Central

    Logan, Dustin M.; Hill, Kyle R.; Larson, Michael J.

    2015-01-01

    Poor awareness has been linked to worse recovery and rehabilitation outcomes following moderate-to-severe traumatic brain injury (M/S TBI). The error positivity (Pe) component of the event-related potential (ERP) is linked to error awareness and cognitive control. Participants included 37 neurologically healthy controls and 24 individuals with M/S TBI who completed a brief neuropsychological battery and the error awareness task (EAT), a modified Stroop go/no-go task that elicits aware and unaware errors. Analyses compared between-group no-go accuracy (including accuracy between the first and second halves of the task to measure attention and fatigue), error awareness performance, and Pe amplitude by level of awareness. The M/S TBI group decreased in accuracy and maintained error awareness over time; control participants improved both accuracy and error awareness during the course of the task. Pe amplitude was larger for aware than unaware errors for both groups; however, consistent with previous research on the Pe and TBI, there were no significant between-group differences for Pe amplitudes. Findings suggest possible attention difficulties and low improvement of performance over time may influence specific aspects of error awareness in M/S TBI. PMID:26217212

  7. Suffering in Silence: Medical Error and its Impact on Health Care Providers.

    PubMed

    Robertson, Jennifer J; Long, Brit

    2018-04-01

    All humans are fallible. Because physicians are human, unintentional errors unfortunately occur. While unintentional medical errors have an impact on patients and their families, they may also contribute to adverse mental and emotional effects on the involved provider(s). These may include burnout, lack of concentration, poor work performance, posttraumatic stress disorder, depression, and even suicidality. The objectives of this article are to 1) discuss the impact medical error has on involved provider(s), 2) provide potential reasons why medical error can have a negative impact on provider mental health, and 3) suggest solutions for providers and health care organizations to recognize and mitigate the adverse effects medical error has on providers. Physicians and other providers may feel a variety of adverse emotions after medical error, including guilt, shame, anxiety, fear, and depression. It is thought that the pervasive culture of perfectionism and individual blame in medicine plays a considerable role toward these negative effects. In addition, studies have found that despite physicians' desire for support after medical error, many physicians feel a lack of personal and administrative support. This may further contribute to poor emotional well-being. Potential solutions in the literature are proposed, including provider counseling, learning from mistakes without fear of punishment, discussing mistakes with others, focusing on the system versus the individual, and emphasizing provider wellness. Much of the reviewed literature is limited in terms of an emergency medicine focus or even regarding physicians in general. In addition, most studies are survey- or interview-based, which limits objectivity. While additional, more objective research is needed in terms of mitigating the effects of error on physicians, this review may help provide insight and support for those who feel alone in their attempt to heal after being involved in an adverse medical event

  8. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    PubMed

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  9. Monte Carlo errors with less errors

    NASA Astrophysics Data System (ADS)

    Wolff, Ulli; Alpha Collaboration

    2004-01-01

    We explain in detail how to estimate mean values and assess statistical errors for arbitrary functions of elementary observables in Monte Carlo simulations. The method is to estimate and sum the relevant autocorrelation functions, which is argued to produce more certain error estimates than binning techniques and hence to help toward a better exploitation of expensive simulations. An effective integrated autocorrelation time is computed which is suitable for benchmarking the efficiencies of simulation algorithms with regard to specific observables of interest. A Matlab code that implements the method is offered for download. It can also combine independent runs (replicas), allowing their consistency to be judged.
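
    A minimal sketch of autocorrelation-aware error estimation in the spirit described above: sum the normalised autocorrelation function up to a window to obtain an integrated autocorrelation time, then inflate the naive error of the mean accordingly. The fixed summation window used here is a crude stand-in for the paper's automatic windowing procedure.

    ```python
    # Error of the mean of a correlated Monte Carlo time series via the integrated
    # autocorrelation time tau_int (fixed-window variant, for illustration only).
    import numpy as np

    def mc_error(samples, window=100):
        x = np.asarray(samples, dtype=float)
        n = len(x)
        xc = x - x.mean()
        var = xc.var()
        # Normalised autocorrelation function up to lag `window`.
        rho = np.array([np.dot(xc[:n - t], xc[t:]) / ((n - t) * var) for t in range(1, window)])
        tau_int = 0.5 + rho.sum()
        err = np.sqrt(2.0 * tau_int * var / n)     # naive error inflated by sqrt(2 * tau_int)
        return x.mean(), err, tau_int

    # Example: an AR(1) chain with known autocorrelation (tau_int should be near 9.5).
    rng = np.random.default_rng(0)
    chain = np.zeros(100_000)
    for i in range(1, len(chain)):
        chain[i] = 0.9 * chain[i - 1] + rng.normal()
    print(mc_error(chain))
    ```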

  10. Comparison of Efficiency of Jackknife and Variance Component Estimators of Standard Errors. Program Statistics Research. Technical Report.

    ERIC Educational Resources Information Center

    Longford, Nicholas T.

    Large scale surveys usually employ a complex sampling design and as a consequence, no standard methods for estimation of the standard errors associated with the estimates of population means are available. Resampling methods, such as jackknife or bootstrap, are often used, with reference to their properties of robustness and reduction of bias. A…
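
    For reference, a minimal delete-one jackknife standard error for an arbitrary estimator is sketched below; real survey applications would typically delete primary sampling units or use replicate weights rather than single cases, so this is only the basic form.

    ```python
    # Delete-one jackknife standard error for an arbitrary estimator.
    import numpy as np

    def jackknife_se(data, estimator):
        data = np.asarray(data, dtype=float)
        n = len(data)
        replicates = np.array([estimator(np.delete(data, i)) for i in range(n)])
        return float(np.sqrt((n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)))

    rng = np.random.default_rng(0)
    sample = rng.normal(50, 10, 200)
    print(jackknife_se(sample, np.mean))     # close to the analytic SE of the mean
    print(jackknife_se(sample, np.median))   # works for estimators without a simple SE formula
    ```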

  11. Technical vision for robots

    NASA Astrophysics Data System (ADS)

    1985-01-01

    A new invention by scientists who have copied the structure of the human eye will help replace the human astronomer watching a telescope with a robot. It will be possible to provide technical vision not only for robot astronomers but also for their industrial fellow robots. So far, an artificial eye with dimensions close to those of a human eye discerns only black-and-white images, but a second model of the eye is intended to perceive colors as well. Polymers suited to the roles of the coat of the eye, the lens, and the vitreous body were applied. The retina has been replaced with a bundle of very fine glass filaments through which light rays reach photomultipliers, which can be positioned outside the artificial eye. The main challenge is to prevent large losses in the light guide.

  12. The Roots of Technical Learning and Thinking: Situating TLT in Schools

    ERIC Educational Resources Information Center

    Hansen, Ron

    2008-01-01

    Technical thinking is defined as an aptitude, ingenuity, and affliction for solving practical problems through experience. From the beginning of civilization such thinking has been a significant part of human existence. Learning associated with it is a natural instinct for most people, young and old, who work in a technical field, pursue a…

  13. Additional Final Area Designations and Technical Amendment for the 2012 Annual Fine Particle Standard Established in 2012 - Mar 2015

    EPA Pesticide Factsheets

    EPA is establishing or revising initial area designations and a technical amendment to correct an inadvertent error in the initial designation for one area for the 2012 annual national ambient air quality standards for fine particle pollution.

  14. Culture in great apes: using intricate complexity in feeding skills to trace the evolutionary origin of human technical prowess.

    PubMed

    Byrne, Richard W

    2007-04-29

    Geographical cataloguing of traits, as used in human ethnography, has led to the description of 'culture' in some non-human great apes. Culture, in these terms, is detected as a pattern of local ignorance resulting from environmental constraints on knowledge transmission. However, in many cases, the geographical variations may alternatively be explained by ecology. Social transmission of information can reliably be identified in many other animal species, by experiment or distinctive patterns in distribution; but the excitement of detecting culture in great apes derives from the possibility of understanding the evolution of cumulative technological culture in humans. Given this interest, I argue that great ape research should concentrate on technically complex behaviour patterns that are ubiquitous within a local population; in these cases, a wholly non-social ontogeny is highly unlikely. From this perspective, cultural transmission has an important role in the elaborate feeding skills of all species of great ape, in conveying the 'gist' or organization of skills. In contrast, social learning is unlikely to be responsible for local stylistic differences, which are apt to reflect sensitive adaptations to ecology.

  15. How glitter relates to gold: similarity-dependent reward prediction errors in the human striatum.

    PubMed

    Kahnt, Thorsten; Park, Soyoung Q; Burke, Christopher J; Tobler, Philippe N

    2012-11-14

    Optimal choices benefit from previous learning. However, it is not clear how previously learned stimuli influence behavior to novel but similar stimuli. One possibility is to generalize based on the similarity between learned and current stimuli. Here, we use neuroscientific methods and a novel computational model to inform the question of how stimulus generalization is implemented in the human brain. Behavioral responses during an intradimensional discrimination task showed similarity-dependent generalization. Moreover, a peak shift occurred, i.e., the peak of the behavioral generalization gradient was displaced from the rewarded conditioned stimulus in the direction away from the unrewarded conditioned stimulus. To account for the behavioral responses, we designed a similarity-based reinforcement learning model wherein prediction errors generalize across similar stimuli and update their value. We show that this model predicts a similarity-dependent neural generalization gradient in the striatum as well as changes in responding during extinction. Moreover, across subjects, the width of generalization was negatively correlated with functional connectivity between the striatum and the hippocampus. This result suggests that hippocampus-striatal connections contribute to stimulus-specific value updating by controlling the width of generalization. In summary, our results shed light onto the neurobiology of a fundamental, similarity-dependent learning principle that allows learning the value of stimuli that have never been encountered.
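
    A schematic sketch of a similarity-weighted reinforcement-learning update of the kind described: the prediction error generated at a trained stimulus also updates the values of nearby stimuli through a Gaussian similarity kernel. Parameters and the stimulus grid are illustrative, not taken from the study.

    ```python
    # Similarity-dependent value updating: prediction errors generalise across stimuli.
    import numpy as np

    stimuli = np.linspace(-5, 5, 41)      # stimulus positions on one feature dimension
    values = np.zeros_like(stimuli)
    alpha, width = 0.3, 1.5               # learning rate and generalisation width (arbitrary)

    def update(trained_pos, reward):
        global values
        idx = np.argmin(np.abs(stimuli - trained_pos))
        delta = reward - values[idx]                              # prediction error at the trained stimulus
        similarity = np.exp(-(stimuli - trained_pos) ** 2 / (2 * width ** 2))
        values = values + alpha * delta * similarity              # generalised update

    for _ in range(50):
        update(trained_pos=+1.0, reward=1.0)   # rewarded conditioned stimulus
        update(trained_pos=-1.0, reward=0.0)   # unrewarded conditioned stimulus

    # The peak of the value gradient is displaced slightly beyond +1.0 (a peak shift).
    print("peak of the value gradient:", stimuli[np.argmax(values)])
    ```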

  16. "Magnetic Reconnection Code: Applications to Sawtooth Oscillations, Error-Field Induced Islands, and the Dynamo Effect" - Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fitzpatrick, Richard

    2007-09-24

    Dr. Fitzpatrick has written an MHD code in order to investigate the interaction of tearing modes with flow and external magnetic perturbations, which has been successfully benchmarked against both linear and nonlinear theory and used to investigate error-field penetration in flowing plasmas. The same code was used to investigate the so-called Taylor problem. He employed the University of Chicago's FLASH code to further investigate the Taylor problem, discovering a new aspect of the problem. Dr. Fitzpatrick has written a 2-D Hall MHD code and used it to investigate the collisionless Taylor problem. Dr. Waelbroeck has performed an investigation of the scaling of the error-field penetration threshold in collisionless plasmas. Paul Watson and Dr. Fitzpatrick have written a fully-implicit extended-MHD code using the PETSC framework. Five publications have resulted from this grant work.

  17. Mechanism of Error-Free Bypass of the Environmental Carcinogen N-(2'-Deoxyguanosin-8-yl)-3-aminobenzanthrone Adduct by Human DNA Polymerase η.

    PubMed

    Patra, Amritraj; Politica, Dustin A; Chatterjee, Arindom; Tokarsky, E John; Suo, Zucai; Basu, Ashis K; Stone, Michael P; Egli, Martin

    2016-11-03

    The environmental pollutant 3-nitrobenzanthrone produces bulky aminobenzanthrone (ABA) DNA adducts with both guanine and adenine nucleobases. A major product occurs at the C8 position of guanine (C8-dG-ABA). These adducts present a strong block to replicative polymerases but, remarkably, can be bypassed in a largely error-free manner by the human Y-family polymerase η (hPol η). Here, we report the crystal structure of a ternary Pol⋅DNA⋅dCTP complex between a C8-dG-ABA-containing template:primer duplex and hPol η. The complex was captured at the insertion stage and provides crucial insight into the mechanism of error-free bypass of this bulky lesion. Specifically, bypass involves accommodation of the ABA moiety inside a hydrophobic cleft to the side of the enzyme active site and formation of an intra-nucleotide hydrogen bond between the phosphate and ABA amino moiety, allowing the adducted guanine to form a standard Watson-Crick pair with the incoming dCTP. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Multiple point mutations in a shuttle vector propagated in human cells: evidence for an error-prone DNA polymerase activity.

    PubMed

    Seidman, M M; Bredberg, A; Seetharam, S; Kraemer, K H

    1987-07-01

    Mutagenesis was studied at the DNA-sequence level in human fibroblast and lymphoid cells by use of a shuttle vector plasmid, pZ189, containing a suppressor tRNA marker gene. In a series of experiments, 62 plasmids were recovered that had two to six base substitutions in the 160-base-pair marker gene. Approximately 20-30% of the mutant plasmids that were recovered after passing ultraviolet-treated pZ189 through a repair-proficient human fibroblast line contained these multiple mutations. In contrast, passage of ultraviolet-treated pZ189 through an excision-repair-deficient (xeroderma pigmentosum) line yielded only 2% multiple base substitution mutants. Introducing a single-strand nick in otherwise unmodified pZ189 adjacent to the marker, followed by passage through the xeroderma pigmentosum cells, resulted in about 66% multiple base substitution mutants. The multiple mutations were found in a 160-base-pair region containing the marker gene but were rarely found in an adjacent 170-base-pair region. Passing ultraviolet-treated or nicked pZ189 through a repair-proficient human B-cell line also yielded multiple base substitution mutations in 20-33% of the mutant plasmids. An explanation for these multiple mutations is that they were generated by an error-prone polymerase while filling gaps. These mutations share many of the properties displayed by mutations in the immunoglobulin hypervariable regions.

  19. To err is human nature. Can transfusion errors due to human factors ever be eliminated?

    PubMed

    Lau, F Y; Cheng, G

    2001-11-01

    Fatal hemolytic transfusion reactions due to ABO incompatibility occur mainly as a result of clerical errors. A blood sample drawn from the wrong patient and labeled as another patient's specimen will not be detected by the blood bank unless there is a previous ABO grouping result. In Hong Kong, we designed a transfusion wristband system with a portable barcode scanner to detect such clerical errors. The system was well accepted by the house staff and has prevented two ABO-mismatched transfusions. Other current systems of patient identification may have similar results, but the wristband system has the advantage of being simple, inexpensive and easy to implement. The Hong Kong Government is planning to replace the personal identity card for all citizens with an electronic smart card by 2003. If the new card contains the person's detailed red cell phenotypes in digital code, then the phenotypes of all blood donors and admitted patients will be readily available. It would then be feasible to issue phenotype-matched blood to patients without any need for pre-transfusion testing, thereby eliminating mismatched transfusions for most patients. Our pilot study of 474 patients showed that the system was safe and that up to 98% of admitted patients could be transfused without delays. Patients with rare phenotypes, visitors or illegal immigrants may still need a pre-transfusion antibody screen, but if most patients can be issued blood units without testing, the potential savings in health care amount to US$14 million/year.

  20. DNA assembly with error correction on a droplet digital microfluidics platform.

    PubMed

    Khilko, Yuliya; Weyman, Philip D; Glass, John I; Adams, Mark D; McNeil, Melanie A; Griffin, Peter B

    2018-06-01

    Custom synthesized DNA is in high demand for synthetic biology applications. However, current technologies to produce these sequences using assembly from DNA oligonucleotides are costly and labor-intensive. The automation and reduced sample volumes afforded by microfluidic technologies could significantly decrease materials and labor costs associated with DNA synthesis. The purpose of this study was to develop a gene assembly protocol utilizing a digital microfluidic device. Toward this goal, we adapted bench-scale oligonucleotide assembly methods followed by enzymatic error correction to the Mondrian™ digital microfluidic platform. We optimized Gibson assembly, polymerase chain reaction (PCR), and enzymatic error correction reactions in a single protocol to assemble 12 oligonucleotides into a 339-bp double-stranded DNA sequence encoding part of the human influenza virus hemagglutinin (HA) gene. The reactions were scaled down to 0.6-1.2 μL. Initial microfluidic assembly methods were successful and had an error frequency of approximately 4 errors/kb, with errors originating from the original oligonucleotide synthesis. Relative to conventional benchtop procedures, PCR optimization required additional amounts of MgCl2, Phusion polymerase, and PEG 8000 to achieve amplification of the assembly and error correction products. After one round of error correction, the error frequency was reduced to an average of 1.8 errors/kb. We demonstrated that DNA assembly from oligonucleotides and error correction could be completely automated on a digital microfluidic (DMF) platform. The results demonstrate that enzymatic reactions in droplets show a strong dependence on surface interactions, and successful on-chip implementation required supplementation with surfactants, molecular crowding agents, and an excess of enzyme. Enzymatic error correction of assembled fragments improved sequence fidelity by 2-fold, which was a significant improvement but somewhat lower than