Applying lessons learned to enhance human performance and reduce human error for ISS operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1999-01-01
A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.
Understanding human management of automation errors
McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.
2013-01-01
Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042
The contributions of human factors on human error in Malaysia aviation maintenance industries
NASA Astrophysics Data System (ADS)
Padil, H.; Said, M. N.; Azizan, A.
2018-05-01
Aviation maintenance is a multitasking activity in which individuals perform varied tasks under constant pressure to meet deadlines as well as challenging work conditions. These situational characteristics combined with human factors can lead to various types of human related errors. The primary objective of this research is to develop a structural relationship model that incorporates human factors, organizational factors, and their impact on human errors in aviation maintenance. Towards that end, a questionnaire was developed and administered to Malaysian aviation maintenance professionals. A Structural Equation Modelling (SEM) approach was used in this study, utilizing AMOS software. Results showed a significant relationship between human factors and human errors in the tested model. Human factors had a partial effect on organizational factors, while organizational factors had a direct and positive impact on human errors. It was also revealed that organizational factors contributed to human errors when coupled with the human factors construct. This study has contributed to the advancement of knowledge on human factors affecting safety, has provided guidelines for improving human factors performance relating to aviation maintenance activities, and could be used as a reference for improving safety performance in Malaysian aviation maintenance companies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callan, J.R.; Kelly, R.T.; Quinn, M.L.
1995-05-01
Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error.
Prediction of human errors by maladaptive changes in event-related brain networks.
Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus
2008-04-22
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.
NASA Technical Reports Server (NTRS)
DeMott, Diana
2013-01-01
Compared to equipment designed to perform the same function over and over, humans are just not as reliable. Computers and machines perform the same action in the same way repeatedly, getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues including stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information, or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off-normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessment (HRA) identifies ways to improve human performance and reliability, and can lead to improvements in systems designed to interact with humans. Understanding the context of situations that can lead to human errors (taking the wrong action, taking no action, or making bad decisions) provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lon N. Haney; David I. Gertman
2003-04-01
Beginning in the 1980s, a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center, and INEEL partnered in a NASA-sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.
Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim
2015-01-01
Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.
Cheng, Ching-Min; Hwang, Sheue-Ling
2015-03-01
This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
ERIC Educational Resources Information Center
Bowe, Melissa; Sellers, Tyra P.
2018-01-01
The Performance Diagnostic Checklist-Human Services (PDC-HS) has been used to assess variables contributing to undesirable staff performance. In this study, three preschool teachers completed the PDC-HS to identify the factors contributing to four paraprofessionals' inaccurate implementation of error-correction procedures during discrete trial…
Development and implementation of a human accuracy program in patient foodservice.
Eden, S H; Wood, S M; Ptak, K M
1987-04-01
For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of product. Administrative dietitians have applied the concepts of using human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. Human error rate was used to monitor and evaluate trayline employee performance and to evaluate layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
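The error-rate metric this abstract applies is simple enough to state directly in code. A minimal sketch, assuming hypothetical trayline tallies; the 3% acceptability threshold echoes the figure cited above, and all names and example values are illustrative, not from the study:

```python
def human_error_rate(errors: int, opportunities: int) -> float:
    """Human error rate: number of errors divided by number of opportunities for error."""
    if opportunities == 0:
        raise ValueError("no opportunities recorded")
    return errors / opportunities

# Hypothetical daily trayline tallies: (errors, opportunities for error).
daily_tallies = [(14, 200), (9, 180), (6, 210)]
for errors, opportunities in daily_tallies:
    rate = human_error_rate(errors, opportunities)
    status = "acceptable" if rate <= 0.03 else "counsel employee / review station"
    print(f"error rate {rate:.1%} -> {status}")
```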
Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Munoz, Cesar A.
2007-01-01
This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.
The Importance of HRA in Human Space Flight: Understanding the Risks
NASA Technical Reports Server (NTRS)
Hamlin, Teri
2010-01-01
Human performance is critical to crew safety during space missions. Humans interact with hardware and software during ground processing, normal flight, and in response to events. Human interactions with hardware and software can cause Loss of Crew and/or Vehicle (LOCV) through improper actions, or may prevent LOCV through recovery and control actions. Humans have the ability to deal with complex situations and system interactions beyond the capability of machines. Human Reliability Analysis (HRA) is a method used to qualitatively and quantitatively assess the occurrence of human failures that affect availability and reliability of complex systems. Modeling human actions with their corresponding failure probabilities in a Probabilistic Risk Assessment (PRA) provides a more complete picture of system risks and risk contributions. A high-quality HRA can provide valuable information on potential areas for improvement, including training, procedures, human interface design, and the need for automation. Modeling human error has always been a challenge, in part because performance data are not always readily available. For spaceflight, the challenge is amplified not only because of the small number of participants and the limited amount of performance data available, but also due to the lack of definition of the unique factors influencing human performance in space. These factors, called performance shaping factors in HRA terminology, are used in HRA techniques to modify basic human error probabilities in order to capture the context of an analyzed task. Many of the human error modeling techniques were developed within the context of nuclear power plants, and therefore the methodologies do not address spaceflight factors such as the effects of microgravity and longer duration missions. This presentation will describe the types of human error risks that have shown up as risk drivers in the Shuttle PRA and that may be applicable to commercial space flight. As with other large PRAs of complex machines, human error in the Shuttle PRA proved to be an important contributor (12 percent) to LOCV. An existing HRA technique was adapted for use in the Shuttle PRA, but additional guidance and improvements are needed to make the HRA task in space-related PRAs easier and more accurate. Therefore, this presentation will also outline plans for expanding current HRA methodology to more explicitly cover spaceflight performance shaping factors.
NASA Technical Reports Server (NTRS)
Landon, Lauren Blackwell; Vessey, William B.; Barrett, Jamie D.
2015-01-01
A team is defined as: "two or more individuals who interact socially and adaptively, have shared or common goals, and hold meaningful task interdependences; it is hierarchically structured and has a limited life span; in it expertise and roles are distributed; and it is embedded within an organization/environmental context that influences and is influenced by ongoing processes and performance outcomes" (Salas, Stagl, Burke, & Goodwin, 2007, p. 189). From the NASA perspective, a team is commonly understood to be a collection of individuals that is assigned to support and achieve a particular mission. Thus, depending on context, this definition can encompass both the spaceflight crew and the individuals and teams in the larger multi-team system who are assigned to support that crew during a mission. The Team Risk outcomes of interest are predominantly performance related, with a secondary emphasis on long-term health; this is somewhat unique in the NASA HRP in that most Risk areas are medically related and primarily focused on long-term health consequences. In many operational environments (e.g., aviation), performance is assessed as the avoidance of errors. However, the research on performance errors is ambiguous. It implies that actions may be dichotomized into "correct" or "incorrect" responses, where incorrect responses or errors are always undesirable. Researchers have argued that this dichotomy is a harmful oversimplification, and it would be more productive to focus on the variability of human performance and how organizations can manage that variability (Hollnagel, Woods, & Leveson, 2006) (Category III). Two problems occur when focusing on performance errors: 1) the errors are infrequent and, therefore, difficult to observe and record; and 2) the errors do not directly correspond to failure. Research reveals that humans are fairly adept at correcting or compensating for performance errors before such errors result in recognizable or recordable failures. Astronauts are notably adept high performers. Most failures are recorded only when multiple, small errors occur and humans are unable to recognize and correct or compensate for these errors in time to prevent a failure (Dismukes, Berman, & Loukopoulos, 2007) (Category III). More commonly, observers record variability in levels of performance. Some teams commit no observable errors but fail to achieve performance objectives or perform only adequately, while other teams commit some errors but perform spectacularly. Successful performance, therefore, cannot be viewed as simply the absence of errors or the avoidance of failure (Johnson Space Center (JSC) Joint Leadership Team, 2008). While failure is commonly attributed to making a major error, focusing solely on the elimination of error(s) does not significantly reduce the risk of failure. Failure may also occur when performance is simply insufficient or an effort is incapable of adjusting sufficiently to a contextual change (e.g., changing levels of autonomy).
Human factors process failure modes and effects analysis (HF PFMEA) software tool
NASA Technical Reports Server (NTRS)
Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)
2011-01-01
Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. The likelihood of occurrence, detection, and correction of the human error is identified, as is the severity of the effect of the human error. From the likelihood of occurrence and the severity, the risk of potential harm is identified. The risk of potential harm is compared with a risk threshold to identify the appropriateness of corrective measures.
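To make the patent's analysis flow concrete, here is a minimal sketch of one plausible scoring scheme. The combination rule, the 0-to-1 scales, the threshold value, and all names are assumptions for illustration; the abstract specifies the inputs and the threshold comparison, not the formula:

```python
from dataclasses import dataclass

@dataclass
class HumanActivity:
    verb: str            # action verb describing the human activity
    p_occurrence: float  # likelihood the error occurs
    p_detection: float   # likelihood the error is detected
    p_correction: float  # likelihood a detected error is corrected
    severity: float      # severity of the error's effect, scaled 0..1

RISK_THRESHOLD = 0.01    # assumed value; the patent abstract does not give one

def risk_of_potential_harm(a: HumanActivity) -> float:
    # An error contributes risk only when it occurs and escapes detection/correction.
    p_escapes = a.p_occurrence * (1.0 - a.p_detection * a.p_correction)
    return p_escapes * a.severity

task = HumanActivity(verb="connect", p_occurrence=0.08,
                     p_detection=0.80, p_correction=0.90, severity=0.70)
risk = risk_of_potential_harm(task)
if risk > RISK_THRESHOLD:
    print(f"'{task.verb}': corrective measures appropriate (risk={risk:.4f})")
```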
Performance Monitoring Applied to System Supervision
Somon, Bertille; Campagne, Aurélie; Delorme, Arnaud; Berberian, Bruno
2017-01-01
Nowadays, automation is present in every aspect of our daily life and has some benefits. Nonetheless, empirical data suggest that traditional automation has many negative performance and safety consequences as it changed task performers into task supervisors. In this context, we propose to use recent insights into the anatomical and neurophysiological substrates of action monitoring in humans, to help further characterize performance monitoring during system supervision. Error monitoring is critical for humans to learn from the consequences of their actions. A wide variety of studies have shown that the error monitoring system is involved not only in our own errors, but also in the errors of others. We hypothesize that the neurobiological correlates of the self-performance monitoring activity can be applied to system supervision. At a larger scale, a better understanding of system supervision may allow its negative effects to be anticipated or even countered. This review is divided into three main parts. First, we assess the neurophysiological correlates of self-performance monitoring and their characteristics during error execution. Then, we extend these results to include performance monitoring and error observation of others or of systems. Finally, we provide further directions in the study of system supervision and assess the limits preventing us from studying a well-known phenomenon: the Out-Of-the-Loop (OOL) performance problem. PMID:28744209
Technical approaches for measurement of human errors
NASA Technical Reports Server (NTRS)
Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.
1980-01-01
Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.
The SACADA database for human reliability and human performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Y. James Chang; Dennis Bley; Lawrence Criscione
2014-05-01
Lack of appropriate and sufficient human performance data has been identified as a key factor affecting human reliability analysis (HRA) quality, especially in the estimation of human error probability (HEP). The Scenario Authoring, Characterization, and Debriefing Application (SACADA) database was developed by the U.S. Nuclear Regulatory Commission (NRC) to address this data need. An agreement between NRC and the South Texas Project Nuclear Operating Company (STPNOC) was established to support the SACADA development, with aims to make the SACADA tool suitable for implementation in the nuclear power plants' operator training program to collect operator performance information. The collected data would support the STPNOC's operator training program and be shared with the NRC for improving HRA quality. This paper discusses the SACADA data taxonomy, the theoretical foundation, the prospective data to be generated from the SACADA raw data to inform human reliability and human performance, and the considerations on the use of simulator data for HRA. Each SACADA data point consists of two information segments: context and performance results. Context is a characterization of the performance challenges to task success. The performance results are the results of performing the task. The data taxonomy uses a macrocognitive functions model for the framework. At a high level, information is classified according to the macrocognitive functions of detecting the plant abnormality, understanding the abnormality, deciding the response plan, executing the response plan, and team related aspects (i.e., communication, teamwork, and supervision). The data are expected to be useful for analyzing the relations between context, error modes and error causes in human performance.
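As a rough illustration of the two-segment data point the abstract describes, the sketch below encodes one record. The macrocognitive function labels come from the abstract; the field names and example values are hypothetical, not the actual SACADA schema:

```python
from dataclasses import dataclass, field

# Macrocognitive functions named in the abstract's taxonomy.
MACRO_FUNCTIONS = ("detecting", "understanding", "deciding",
                   "executing", "team-related")

@dataclass
class SacadaDataPoint:
    """One data point: a context segment plus a performance-results segment."""
    macro_function: str
    context: dict = field(default_factory=dict)              # performance challenges
    performance_results: dict = field(default_factory=dict)  # results of the task

point = SacadaDataPoint(
    macro_function="detecting",
    context={"challenge": "alarm masked by concurrent annunciators"},
    performance_results={"outcome": "error", "error_mode": "missed indication"},
)
assert point.macro_function in MACRO_FUNCTIONS
```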
Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant
Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar
2015-01-01
Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided. PMID:27014485
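For readers unfamiliar with SPAR-H, the quantification step amounts to scaling a nominal human error probability (HEP) by performance shaping factor (PSF) multipliers. A minimal sketch; the nominal HEPs and the adjustment formula follow the published SPAR-H method as I understand it, while the example PSF multipliers for the gas-testing task are hypothetical:

```python
NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}  # SPAR-H nominal HEPs

def spar_h_hep(task_type, psf_multipliers):
    """Nominal HEP scaled by the composite PSF; SPAR-H's adjustment factor
    is applied when three or more PSFs are negative (multiplier > 1)."""
    nhep = NOMINAL_HEP[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if sum(1 for m in psf_multipliers if m > 1) >= 3:
        # Adjustment keeps the result a valid probability (<= 1).
        return (nhep * composite) / (nhep * (composite - 1.0) + 1.0)
    return min(nhep * composite, 1.0)

# Hypothetical PSF multipliers for the flammable-gas-testing task:
# high stress (x2), complex task (x5), heavy time pressure (x10).
print(f"HEP = {spar_h_hep('action', [2, 5, 10]):.3f}")
```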
Analyzing human errors in flight mission operations
NASA Technical Reports Server (NTRS)
Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef
1993-01-01
A long-term program is in progress at JPL to reduce the cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISA's) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISA's) is presented here. The resulting clusters described the underlying relationships among the ISA's. Initial models of human error in flight mission operations are presented. Next, the Voyager ISA's will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
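A Pareto analysis of anomaly classifications amounts to ranking categories by frequency and accumulating their shares. A minimal sketch; the human-error count is chosen so that 600 of 1580 reports reproduce the 38 percent figure above, and the remaining categories and counts are hypothetical:

```python
from collections import Counter

# Hypothetical ISA classifications; only the 1580 total and the 38 percent
# human-error share are taken from the abstract.
classifications = (["human error"] * 600 + ["software defect"] * 450
                   + ["hardware failure"] * 330 + ["procedure gap"] * 200)

counts = Counter(classifications)
total = sum(counts.values())
cumulative = 0.0
for category, n in counts.most_common():  # Pareto ordering: most frequent first
    share = n / total
    cumulative += share
    print(f"{category:18s} {share:6.1%}  cumulative {cumulative:6.1%}")
```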
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, Diana
2014-01-01
Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does it need to be? Companies with high risk, or major consequences, should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, errors in the administration of medication, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas for concern to detailed assessments with calculated human error failure probabilities. Which methodology to use would be based on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how the human errors could occur, such as tasks, tools, environment, workplace, support, training, and procedure; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies, and routine tasks versus cost-based concerns). Human Reliability Assessments should be the first step to reduce, mitigate, or eliminate costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks allows a company more opportunities to mitigate or eliminate these risks and prevent costly failures.
21st Century Human Performance.
ERIC Educational Resources Information Center
Clark, Ruth Colvin
1995-01-01
Technology can extend human memory and improve performance, but bypassing human intelligence has its dangers. Cognitive apprenticeships that compress learning experiences, provide coaching, and allow trial and error can build complex problem-solving skills and develop expertise. (SK)
Daud-Gallotti, Renata Mahfuz; Morinaga, Christian Valle; Arlindo-Rodrigues, Marcelo; Velasco, Irineu Tadeu; Arruda Martins, Milton; Tiberio, Iolanda Calvo
2011-01-01
INTRODUCTION: Patient safety is seldom assessed using objective evaluations during undergraduate medical education. OBJECTIVE: To evaluate the performance of fifth-year medical students using an objective structured clinical examination focused on patient safety after implementation of an interactive program based on adverse events recognition and disclosure. METHODS: In 2007, a patient safety program was implemented in the internal medicine clerkship of our hospital. The program focused on human error theory, epidemiology of incidents, adverse events, and disclosure. Upon completion of the program, students completed an objective structured clinical examination with five stations and standardized patients. One station focused on patient safety issues, including medical error recognition/disclosure, the patient-physician relationship and humanism issues. A standardized checklist was completed by each standardized patient to assess the performance of each student. The student's global performance at each station and performance in the domains of medical error, the patient-physician relationship and humanism were determined. The correlations between the student performances in these three domains were calculated. RESULTS: A total of 95 students participated in the objective structured clinical examination. The mean global score at the patient safety station was 87.59±1.24 points. Students' performance in the medical error domain was significantly lower than their performance on patient-physician relationship and humanistic issues. Less than 60% of students (n = 54) offered the simulated patient an apology after a medical error occurred. A significant correlation was found between scores obtained in the medical error domains and scores related to both the patient-physician relationship and humanistic domains. CONCLUSIONS: An objective structured clinical examination is a useful tool to evaluate patient safety competencies during the medical student clerkship. PMID:21876976
The application of robotics to microlaryngeal laser surgery.
Buckmire, Robert A; Wong, Yu-Tung; Deal, Allison M
2015-06-01
To evaluate the performance of human subjects using a prototype robotic micromanipulator controller in a simulated microlaryngeal operative setting. Observational cross-sectional study. Twenty-two human subjects with varying degrees of laser experience performed CO2 laser surgical tasks within a simulated microlaryngeal operative setting using an industry-standard manual micromanipulator (MMM) and a prototype robotic micromanipulator controller (RMC). Accuracy, repeatability, and ablation consistency measures were obtained for each human subject across both conditions and for the preprogrammed RMC device. Using the standard MMM, surgeons with >10 previous laser cases performed superiorly to subjects with fewer cases on measures of error percentage and cumulative error (P = .045 and .03, respectively). No significant differences in performance were observed between subjects using the RMC device. In the programmed (P/A) mode, the RMC performed equivalently or superiorly to experienced human subjects on accuracy and repeatability measures, and nearly an order of magnitude better on measures of ablation consistency. The programmed RMC performed significantly better for repetition error when compared to human subjects with <100 previous laser cases (P = .04). Experienced laser surgeons perform better than novice surgeons on tasks of accuracy and repeatability using the MMM device, but roughly equivalently using the novel RMC. Operated in the P/A mode, the RMC performs equivalently or superiorly to experienced laser surgeons using the industry-standard MMM for all measured parameters, and delivers an ablation consistency nearly an order of magnitude better than human laser operators. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.
Human error identification for laparoscopic surgery: Development of a motion economy perspective.
Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong
2015-09-01
This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Advancing Usability Evaluation through Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring; David I. Gertman
2005-07-01
This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis to heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
Fifty Years of THERP and Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L. Boring
2012-06-01
In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of the NUREG/CR-1278 in 1983. THERP is now 50 years old, and remains the most well known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament of its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain's pioneering work.
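Since THERP's machinery (human error probabilities, event trees, dependence) is named above, a compact sketch may help. The conditional-HEP dependence equations below follow NUREG/CR-1278; the sequence combination is a deliberate simplification of a full HRA event tree, and the step HEPs are hypothetical:

```python
# THERP conditional HEPs given failure of the preceding task (NUREG/CR-1278).
DEPENDENCE = {
    "zero":     lambda p: p,
    "low":      lambda p: (1 + 19 * p) / 20,
    "moderate": lambda p: (1 + 6 * p) / 7,
    "high":     lambda p: (1 + p) / 2,
    "complete": lambda p: 1.0,
}

def sequence_failure_probability(heps, dependence="zero"):
    """P(at least one failure) over sequential task steps. Simplification:
    the dependence adjustment is applied to every step after the first,
    rather than evaluating the full success/failure event tree."""
    p_all_success = 1.0
    for i, p in enumerate(heps):
        p_step = p if i == 0 else DEPENDENCE[dependence](p)
        p_all_success *= 1.0 - p_step
    return 1.0 - p_all_success

# Hypothetical three-step task, nominal HEP 0.003 per step:
for level in ("zero", "low", "high"):
    print(level, round(sequence_failure_probability([0.003] * 3, level), 4))
```

The spread between the "zero" and "high" results shows why the dependence model matters: treating coupled steps as independent can understate the sequence failure probability by orders of magnitude.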
Evidence Report: Risk of Performance Errors Due to Training Deficiencies
NASA Technical Reports Server (NTRS)
Barshi, Immanuel
2012-01-01
The Risk of Performance Errors Due to Training Deficiencies is identified by the National Aeronautics and Space Administration (NASA) Human Research Program (HRP) as a recognized risk to human health and performance in space. The HRP Program Requirements Document (PRD) defines these risks. This Evidence Report provides a summary of the evidence that has been used to identify and characterize this risk. Given that training content, timing, intervals, and delivery methods must support crew task performance, and given that training paradigms will be different for long-duration missions with increased crew autonomy, there is a risk that operators will lack the skills or knowledge necessary to complete critical tasks, resulting in flight and ground crew errors and inefficiencies, failed mission and program objectives, and an increase in crew injuries.
Doytchev, Doytchin E; Szwillus, Gerd
2009-11-01
Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.
Cutting the Cord: Discrimination and Command Responsibility in Autonomous Lethal Weapons
2014-02-13
machine responses to identical stimuli, and it was the job of a third party human “witness” to determine which participant was man and which was...machines may be error free, but there are potential benefits to be gained through autonomy if machines can meet or exceed human performance in...lieu of human operators and reap the benefits that autonomy provides. Human and Machine Error It would be foolish to assert that either humans
A stochastic dynamic model for human error analysis in nuclear power plants
NASA Astrophysics Data System (ADS)
Delgado-Loperena, Dharma
Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that historically have independently studied the nature of error and human behavior, including concepts derived from fractal and chaos theory, and suggests re-evaluation of base theory regarding human error. The results of this research were based on comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for other formulas used to study human error consequences. The search for literature regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or those who employed the ecological model in their work. The study of patterns obtained from a steam generator tube rupture (SGTR) event simulation provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on understanding the patterns of human error can be gleaned, helping to reduce and prevent undesirable events.
Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David
2016-01-01
Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation, where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes as a result of individual human factors and surgical teams should be better recognised and emphasised. Attitudes and acceptance of preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, highlight how they can lead to error, and show how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Causal Evidence from Humans for the Role of Mediodorsal Nucleus of the Thalamus in Working Memory.
Peräkylä, Jari; Sun, Lihua; Lehtimäki, Kai; Peltola, Jukka; Öhman, Juha; Möttönen, Timo; Ogawa, Keith H; Hartikainen, Kaisa M
2017-12-01
The mediodorsal nucleus of the thalamus (MD), with its extensive connections to the lateral pFC, has been implicated in human working memory and executive functions. However, this understanding is based solely on indirect evidence from human lesion and imaging studies and animal studies. Direct, causal evidence from humans is missing. To obtain direct evidence for MD's role in humans, we studied patients treated with deep brain stimulation (DBS) for refractory epilepsy. This treatment is thought to prevent the generalization of a seizure by disrupting the functioning of the patient's anterior nuclei of the thalamus (ANT) with high-frequency electric stimulation. This structure is located superior and anterior to MD, and when the DBS lead is implanted in ANT, tip contacts of the lead typically penetrate through ANT into the adjoining MD. To study the role of MD in human executive functions and working memory, we periodically disrupted and recovered MD's function with high-frequency electric stimulation using DBS contacts reaching MD while participants performed a cognitive task engaging several aspects of executive functions. We hypothesized that the efficacy of executive functions, specifically working memory, is impaired when the functioning of MD is perturbed by high-frequency stimulation. Eight participants treated with ANT-DBS for refractory epilepsy performed a computer-based test of executive functions while DBS was repeatedly switched ON and OFF at MD and at the control location (ANT). In comparison to stimulation of the control location, when MD was stimulated, participants committed 2.26 times more errors in general (total errors; OR = 2.26, 95% CI [1.69, 3.01]) and 2.86 times more working memory-related errors specifically (incorrect button presses; OR = 2.88, CI [1.95, 4.24]). Similarly, participants committed 1.81 times more errors in general (OR = 1.81, CI [1.45, 2.24]) and 2.08 times more working memory-related errors (OR = 2.08, CI [1.57, 2.75]) in comparison to the no-stimulation condition. "Total errors" is a composite score consisting of basic error types and was mostly driven by working memory-related errors. The facts that MD and a control location, ANT, are only a few millimeters away from each other and that their stimulation produces very different results highlight the location-specific effect of DBS rather than a regionally unspecific general effect. In conclusion, disrupting and recovering MD's function with high-frequency electric stimulation modulated participants' online working memory performance, providing causal, in vivo evidence from humans for the role of MD in human working memory.
How To Make the Most of Your Human: Design Considerations for Single Pilot Operations
NASA Technical Reports Server (NTRS)
Schutte, Paul C.
2015-01-01
Reconsidering the function allocation between automation and the pilot in the flight deck is the next step in improving aviation safety. The current allocation, based on who does what best, makes poor use of the pilot's resources and abilities. In some cases it may actually handicap pilots from performing their role. Improving pilot performance first lies in defining the role of the pilot - why a human is needed in the first place. The next step is allocating functions based on the needs of that role (rather than fitness), then using automation to target specific human weaknesses in performing that role. Examples are provided (some of which could be implemented in conventional cockpits now). Along the way, the definition of human error and the idea that eliminating/automating the pilot will reduce instances of human error will be challenged.
NASA: Model development for human factors interfacing
NASA Technical Reports Server (NTRS)
Smith, L. L.
1984-01-01
The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.
Competition between learned reward and error outcome predictions in anterior cingulate cortex.
Alexander, William H; Brown, Joshua W
2010-02-15
The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Silva-Martinez, Jackelynne; Ellenberger, Richard; Dory, Jonathan
2017-01-01
This project aims to identify poor human factors design decisions that led to error-prone systems, or did not facilitate the flight crew making the right choices; and to verify that NASA is effectively preventing similar incidents from occurring again. This analysis was performed by reviewing significant incidents and close calls in human spaceflight identified by the NASA Johnson Space Center Safety and Mission Assurance Flight Safety Office. The review of incidents shows whether the identified human errors were due to the operational phase (flight crew and ground control) or if they initiated at the design phase (includes manufacturing and test). This classification was performed with the aid of the NASA Human Systems Integration domains. This in-depth analysis resulted in a tool that helps with the human factors classification of significant incidents and close calls in human spaceflight, which can be used to identify human errors at the operational level, and how they were or should be minimized. Current governing documents on human systems integration for both government and commercial crew were reviewed to see if current requirements, processes, training, and standard operating procedures protect the crew and ground control against these issues occurring in the future. Based on the findings, recommendations to target those areas are provided.
Foster, J D; Miskovic, D; Allison, A S; Conti, J A; Ockrim, J; Cooper, E J; Hanna, G B; Francis, N K
2016-06-01
Laparoscopic rectal resection is technically challenging, with outcomes dependent upon technical performance. No robust objective assessment tool exists for laparoscopic rectal resection surgery. This study aimed to investigate the application of the objective clinical human reliability analysis (OCHRA) technique for assessing technical performance of laparoscopic rectal surgery and explore the validity and reliability of this technique. Laparoscopic rectal cancer resection operations were described in the format of a hierarchical task analysis. Potential technical errors were defined. The OCHRA technique was used to identify technical errors enacted in videos of twenty consecutive laparoscopic rectal cancer resection operations from a single site. The procedural task, spatial location, and circumstances of all identified errors were logged. Clinical validity was assessed through correlation with clinical outcomes; reliability was assessed by test-retest. A total of 335 execution errors were identified, with a median of 15 per operation. More errors were observed during pelvic tasks compared with abdominal tasks (p < 0.001). Within the pelvis, more errors were observed during dissection on the right side than the left (p = 0.03). Test-retest confirmed reliability (r = 0.97, p < 0.001). A significant correlation was observed between error frequency and mesorectal specimen quality (r_s = 0.52, p = 0.02) and with blood loss (r_s = 0.609, p = 0.004). OCHRA offers a valid and reliable method for evaluating technical performance of laparoscopic rectal surgery.
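The error-outcome correlations reported above (r_s) are rank-based. A minimal sketch with invented per-operation data (not the study's) shows the computation; scipy's spearmanr returns both the coefficient and its p value:

```python
from scipy.stats import spearmanr

# Hypothetical per-operation data: error counts and estimated
# blood loss (ml) for a handful of cases.
errors     = [8, 12, 15, 15, 19, 24, 30]
blood_loss = [50, 80, 60, 120, 150, 200, 260]

r_s, p = spearmanr(errors, blood_loss)
print(f"r_s = {r_s:.2f}, p = {p:.3f}")
```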
Common medial frontal mechanisms of adaptive control in humans and rodents
Frank, Michael J.; Laubach, Mark
2013-01-01
In this report, we describe how common brain networks within the medial frontal cortex facilitate adaptive behavioral control in rodents and humans. We demonstrate that low frequency oscillations below 12 Hz are dramatically modulated after errors in humans over mid-frontal cortex and in rats within prelimbic and anterior cingulate regions of medial frontal cortex. These oscillations were phase-locked between medial frontal cortex and motor areas in both rats and humans. In rats, single neurons that encoded prior behavioral outcomes were phase-coherent with low-frequency field oscillations particularly after errors. Inactivating medial frontal regions in rats led to impaired behavioral adjustments after errors, eliminated the differential expression of low frequency oscillations after errors, and increased low-frequency spike-field coupling within motor cortex. Our results describe a novel mechanism for behavioral adaptation via low-frequency oscillations and elucidate how medial frontal networks synchronize brain activity to guide performance. PMID:24141310
Skills, rules and knowledge in aircraft maintenance: errors in context
NASA Technical Reports Server (NTRS)
Hobbs, Alan; Williamson, Ann
2002-01-01
Automatic or skill-based behaviour is generally considered to be less prone to error than behaviour directed by conscious control. However, researchers who have applied Rasmussen's skill-rule-knowledge human error framework to accidents and incidents have sometimes found that skill-based errors appear in significant numbers. It is proposed that this is largely a reflection of the opportunities for error which workplaces present and does not indicate that skill-based behaviour is intrinsically unreliable. In the current study, 99 errors reported by 72 aircraft mechanics were examined in the light of a task analysis based on observations of the work of 25 aircraft mechanics. The task analysis identified the opportunities for error presented at various stages of maintenance work packages and by the job as a whole. Once the frequency of each error type was normalized in terms of the opportunities for error, it became apparent that skill-based performance is more reliable than rule-based performance, which is in turn more reliable than knowledge-based performance. The results reinforce the belief that industrial safety interventions designed to reduce errors would best be directed at those aspects of jobs that involve rule- and knowledge-based performance.
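The key analytic move, normalizing raw error counts by the opportunities for error at each performance level, can be sketched as follows (counts are invented for illustration):

```python
# Invented counts: raw errors and opportunities for error at each of
# Rasmussen's performance levels.
observed = {"skill": 40, "rule": 35, "knowledge": 24}
opportunities = {"skill": 4000, "rule": 1200, "knowledge": 300}

for level, n_err in observed.items():
    rate = n_err / opportunities[level]
    print(f"{level:9s} errors per opportunity: {rate:.4f}")
# Raw counts suggest skill-based errors dominate, but the normalized
# rates reverse the ordering: skill < rule < knowledge.
```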
Neurochemical enhancement of conscious error awareness.
Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A
2012-02-22
How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.
Managerial process improvement: a lean approach to eliminating medication delivery errors.
Hussain, Aftab; Stewart, LaShonda M; Rivers, Patrick A; Munchus, George
2015-01-01
Statistical evidence shows that medication errors are a major cause of injuries and a concern for all health care organizations. Despite all the efforts to improve the quality of care, the lack of understanding and the inability of management to design a robust system that will strategically target those factors is a major cause of distress. The paper aims to discuss these issues. Achieving optimum organizational performance requires two key variables: work process factors and human performance factors. The approach is that healthcare administrators must take into account both variables in designing a strategy to reduce medication errors. However, strategies that will combat such phenomena require that managers and administrators understand the key factors that are causing medication delivery errors. The authors recommend that healthcare organizations implement the Toyota Production System (TPS) combined with human performance improvement (HPI) methodologies to eliminate medication delivery errors in hospitals. Despite all the efforts to improve the quality of care, there continues to be a lack of understanding and ability on the part of management to design a robust system that will strategically target those factors associated with medication errors. This paper proposes a solution to an ambiguous workflow process using the TPS combined with the HPI system.
DOT National Transportation Integrated Search
1999-11-01
The program implements DOT Human Factors Coordinating Committee (HFCC) recommendations for a coordinated Departmental Human Factors Research Program to advance the human-centered systems approach for enhancing transportation safety. Human error is a ...
NASA Technical Reports Server (NTRS)
Banks, Akeem
2012-01-01
This final report summarizes research that relates to human behavioral health and performance of astronauts and flight controllers. Literature reviews, data archival analyses, and ground-based analog studies that center around the risks of human space flight are being used to help mitigate human behavior and performance risks on long duration space flights. A qualitative analysis of an astronaut autobiography was completed. An analysis was also conducted on exercise countermeasure publications to show the positive effects of exercise on the risks targeted in this study. The three main risks targeted in this study are the risk of behavioral and psychiatric disorders, the risk of performance errors due to poor team performance, cohesion, and composition, and the risk of performance errors due to sleep deprivation and circadian rhythm disruption. These three risks focus on psychological and physiological aspects of astronauts who venture out into space on long duration space missions. The purpose of this research is to target these risks in order to help quantify, identify, and mature the countermeasures and technologies required to prevent or mitigate adverse outcomes from exposure to the spaceflight environment.
Diuk, Carlos; Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew; Niv, Yael
2013-03-27
Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously.
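The core idea can be sketched in code: in a two-level hierarchical task, a temporal-difference error can be computed at each level from that level's own value estimate and its own (pseudo-)reward, so two prediction errors may coincide at a single time step. Names and parameters below are illustrative, not the study's model:

```python
# Two simultaneous, uncorrelated TD errors in a two-level hierarchy.
# V_low tracks immediate subtask outcomes; V_high tracks overall task value.
alpha = 0.1  # learning rate (illustrative)

V_low, V_high = 0.0, 0.0
for trial in range(100):
    # Independent outcomes at the two levels of the hierarchy.
    r_low = 1.0 if trial % 2 == 0 else 0.0    # subtask pseudo-reward
    r_high = 1.0 if trial % 3 == 0 else 0.0   # top-level reward

    delta_low = r_low - V_low      # low-level prediction error
    delta_high = r_high - V_high   # high-level prediction error

    V_low += alpha * delta_low
    V_high += alpha * delta_high

print(round(V_low, 2), round(V_high, 2))  # approach 0.5 and ~0.33
```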
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey C. Joe; Ronald L. Boring
Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission's (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research with the objective of modeling and quantifying team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which in turn improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.
Human Factors Research Under Ground-Based and Space Conditions. Part 1
NASA Technical Reports Server (NTRS)
1997-01-01
Session TP2 includes short reports concerning: (1) Human Factors Engineering of the International Space Station Human Research Facility; (2) Structured Methods for Identifying and Correcting Potential Human Errors in Space Operations; (3) An Improved Procedure for Selecting Astronauts for Extended Space Missions; (4) The NASA Performance Assessment Workstation: Cognitive Performance During Head-Down Bedrest; (5) Cognitive Performance Aboard the Life and Microgravity Spacelab; and (6) Psychophysiological Reactivity Under MIR-Simulation and Real Micro-G.
Effect of Job Satisfaction and Motivation towards Employee's Performance in XYZ Shipping Company
ERIC Educational Resources Information Center
Octaviannand, Ramona; Pandjaitan, Nurmala K.; Kuswanto, Sadikin
2017-01-01
In the digital and globalization era, which demands technological progress, human resources need to work with greater closeness and concentration. Small errors can escalate into fatal errors that result in high costs for the company. The loss of motivation at work influences employee satisfaction and has a negative impact on employee performance. Research was…
Automated Identification of Abnormal Adult EEGs
López, S.; Suarez, G.; Jungreis, D.; Obeid, I.; Picone, J.
2016-01-01
The interpretation of electroencephalograms (EEGs) is a process that is still dependent on the subjective analysis of the examiners. Though interrater agreement on critical events such as seizures is high, it is much lower on subtler events (e.g., when there are benign variants). The process used by an expert to interpret an EEG is quite subjective and hard to replicate by machine. The performance of machine learning technology is far from human performance. We have been developing an interpretation system, AutoEEG, with a goal of exceeding human performance on this task. In this work, we are focusing on one of the early decisions made in this process – whether an EEG is normal or abnormal. We explore two baseline classification algorithms: k-Nearest Neighbor (kNN) and Random Forest Ensemble Learning (RF). A subset of the TUH EEG Corpus was used to evaluate performance. Principal Components Analysis (PCA) was used to reduce the dimensionality of the data. kNN achieved a 41.8% detection error rate while RF achieved an error rate of 31.7%. These error rates are significantly lower than those obtained by random guessing based on priors (49.5%). The majority of the errors were related to misclassification of normal EEGs. PMID:27195311
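A minimal scikit-learn sketch of the described baseline pipeline: PCA for dimensionality reduction followed by kNN and Random Forest classifiers. The feature matrix and labels below are random placeholders; the paper's features came from the TUH EEG Corpus.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 200))      # placeholder EEG feature vectors
y = rng.integers(0, 2, size=600)     # 0 = normal, 1 = abnormal (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pca = PCA(n_components=40).fit(X_tr)  # reduce dimensionality
X_tr_p, X_te_p = pca.transform(X_tr), pca.transform(X_te)

for name, clf in [("kNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RF", RandomForestClassifier(n_estimators=200,
                                                random_state=0))]:
    clf.fit(X_tr_p, y_tr)
    err = 1.0 - clf.score(X_te_p, y_te)
    print(f"{name} detection error rate: {err:.3f}")
```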
Design and Validation of an Infrared Badal Optometer for Laser Speckle (IBOLS)
Teel, Danielle F. W.; Copland, R. James; Jacobs, Robert J.; Wells, Thad; Neal, Daniel R.; Thibos, Larry N.
2009-01-01
Purpose To validate the design of an infrared wavefront aberrometer with a Badal optometer employing the principle of laser speckle generated by a spinning disk and infrared light. The instrument was designed for subjective meridional refraction in infrared light by human patients. Methods Validation employed a model eye with known refractive error determined with an objective infrared wavefront aberrometer. The model eye was used to produce a speckle pattern on an artificial retina with controlled amounts of ametropia introduced with auxiliary ophthalmic lenses. A human observer performed the psychophysical task of observing the speckle pattern (with the aid of a video camera sensitive to infrared radiation) formed on the artificial retina. Refraction was performed by adjusting the vergence of incident light with the Badal optometer to nullify the motion of laser speckle. Validation of the method was performed for different levels of spherical ametropia and for various configurations of an astigmatic model eye. Results Subjective measurements of meridional refractive error over the range −4 D to +4 D agreed with astigmatic refractive errors predicted by the power of the model eye in the meridian of motion of the spinning disk. Conclusions Use of a Badal optometer to control laser speckle is a valid method for determining subjective refractive error at infrared wavelengths. Such an instrument will be useful for comparing objective measures of refractive error obtained for the human eye with autorefractors and wavefront aberrometers that employ infrared radiation. PMID:18772719
Assisting Movement Training and Execution With Visual and Haptic Feedback.
Ewerton, Marco; Rother, David; Weimar, Jakob; Kollegger, Gerrit; Wiemeyer, Josef; Peters, Jan; Maeda, Guilherme
2018-01-01
In the practice of motor skills in general, errors in the execution of movements may go unnoticed when a human instructor is not available. In this case, a computer system or robotic device able to detect movement errors and propose corrections would be of great help. This paper addresses the problem of how to detect such execution errors and how to provide feedback to the human to correct his/her motor skill using a general, principled methodology based on imitation learning. The core idea is to compare the observed skill with a probabilistic model learned from expert demonstrations. The intensity of the feedback is regulated by the likelihood of the model given the observed skill. Based on demonstrations, our system can, for example, detect errors in the writing of characters with multiple strokes. Moreover, by using a haptic device, the Haption Virtuose 6D, we demonstrate a method to generate haptic feedback based on a distribution over trajectories, which could be used as an auxiliary means of communication between an instructor and an apprentice. Additionally, given a performance measurement, the haptic device can help the human discover and perform better movements to solve a given task. In this case, the human first tries a few times to solve the task without assistance. Our framework, in turn, uses a reinforcement learning algorithm to compute haptic feedback, which guides the human toward better solutions.
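The core idea, regulating feedback intensity by the likelihood of the observed movement under a model learned from expert demonstrations, can be sketched with a per-timestep Gaussian model. This is a simplification of the distribution over trajectories used in the paper; all data are invented.

```python
import numpy as np
from scipy.stats import norm

# Expert demonstrations: each row is one demonstrated trajectory
# (1-D here for simplicity; the paper models multi-DOF trajectories).
demos = np.array([[0.0, 0.5, 1.0, 1.4, 1.5],
                  [0.1, 0.6, 0.9, 1.3, 1.6],
                  [0.0, 0.4, 1.1, 1.5, 1.5]])
mu, sigma = demos.mean(axis=0), demos.std(axis=0) + 1e-3

observed = np.array([0.0, 0.5, 1.4, 2.0, 1.5])  # apprentice's attempt

# Feedback intensity grows as the per-timestep likelihood falls.
likelihood = norm.pdf(observed, loc=mu, scale=sigma)
peak = norm.pdf(mu, loc=mu, scale=sigma)    # max attainable density
intensity = 1.0 - likelihood / peak         # 0 = expert-like, 1 = far off
correction = mu - observed                  # direction of corrective feedback
print(np.round(intensity, 2), np.round(correction, 2))
```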
The effects of multiple aerospace environmental stressors on human performance
NASA Technical Reports Server (NTRS)
Popper, S. E.; Repperger, D. W.; Mccloskey, K.; Tripp, L. D.
1992-01-01
An extended Fitts' law paradigm reaction time (RT) task was used to evaluate the effects of acceleration on human performance in the Dynamic Environment Simulator (DES) at Armstrong Laboratory, Wright-Patterson AFB, Ohio. This effort was combined with an evaluation of the standard CSU-13 P anti-gravity suit versus three configurations of a 'retrograde inflation anti-G suit'. Results indicated that RT and error rates increased 17 percent and 14 percent, respectively, from baseline to the end of the simulated aerial combat maneuver, and that the most common error was pressing too few buttons.
NASA Technical Reports Server (NTRS)
Uhlemann, H.; Geiser, G.
1975-01-01
Multivariable manual compensatory tracking experiments were carried out in order to determine typical strategies of the human operator and the conditions under which performance improves when one of the visual displays of the tracking errors is supplemented by auditory feedback. Because the tracking error of the exclusively visually displayed system decreased, but not in general that of the auditorily supported system, it was concluded that auditory feedback unloads the visual system of the operator, who can then concentrate on the remaining exclusively visual displays.
NASA Technical Reports Server (NTRS)
Foyle, David C.; Goodman, Allen; Hooley, Becky L.
2003-01-01
An overview is provided of the Human Performance Modeling (HPM) element within the NASA Aviation Safety Program (AvSP). Two separate model development tracks for performance modeling of real-world aviation environments are described: the first focuses on the advancement of cognitive modeling tools for system design, while the second centers on a prescriptive engineering model of activity tracking for error detection and analysis. A progressive implementation strategy for both tracks is discussed in which increasingly more complex, safety-relevant applications are undertaken to extend the state-of-the-art, as well as to reveal potential human-system vulnerabilities in the aviation domain. Of particular interest is the ability to predict the precursors to error and to assess potential mitigation strategies associated with the operational use of future flight deck technologies.
Agreement between Computerized and Human Assessment of Performance on the Ruff Figural Fluency Test
Elderson, Martin F.; Pham, Sander; van Eersel, Marlise E. A.; Wolffenbuttel, Bruce H. R.; Kok, Johan; Gansevoort, Ron T.; Tucha, Oliver; van der Klauw, Melanie M.; Slaets, Joris P. J.
2016-01-01
The Ruff Figural Fluency Test (RFFT) is a sensitive test for nonverbal fluency suitable for all age groups. However, assessment of performance on the RFFT is time-consuming and may be affected by interrater differences. Therefore, we developed computer software specifically designed to analyze performance on the RFFT by automated pattern recognition. The aim of this study was to compare assessment by the new software with conventional assessment by human raters. The software was developed using data from the Lifelines Cohort Study and validated in an independent cohort of the Prevention of Renal and Vascular End Stage Disease (PREVEND) study. The total study population included 1,761 persons: 54% men; mean age (SD), 58 (10) years. All RFFT protocols were assessed by the new software and two independent human raters (criterion standard). The mean number of unique designs (SD) was 81 (29) and the median number of perseverative errors (interquartile range) was 9 (4 to 16). The intraclass correlation coefficient (ICC) between the computerized and human assessment was 0.994 (95%CI, 0.988 to 0.996; p<0.001) and 0.991 (95%CI, 0.990 to 0.991; p<0.001) for the number of unique designs and perseverative errors, respectively. The mean difference (SD) between the computerized and human assessment was -1.42 (2.78) and +0.02 (1.94) points for the number of unique designs and perseverative errors, respectively. This was comparable to the agreement between two independent human assessments: ICC, 0.995 (0.994 to 0.995; p<0.001) and 0.985 (0.982 to 0.988; p<0.001), and mean difference (SD), -0.44 (2.98) and +0.56 (2.36) points for the number of unique designs and perseverative errors, respectively. We conclude that the agreement between the computerized and human assessment was very high and comparable to the agreement between two independent human assessments. Therefore, the software is an accurate tool for the assessment of performance on the RFFT. PMID:27661083
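Agreement here is quantified with an intraclass correlation. A minimal sketch, assuming a two-way random-effects, absolute-agreement, single-rater ICC (ICC(2,1) in Shrout and Fleiss terms; the paper does not specify the variant), computed from a small made-up rating matrix:

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: (n subjects) x (k raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ssr = k * ((row_means - grand) ** 2).sum()   # between subjects
    ssc = n * ((col_means - grand) ** 2).sum()   # between raters
    sst = ((ratings - grand) ** 2).sum()
    sse = sst - ssr - ssc                        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Made-up unique-design scores from two raters on six protocols.
scores = np.array([[78, 80], [62, 60], [91, 93], [45, 44], [70, 71], [85, 88]])
print(round(icc_2_1(scores), 3))
```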
Performance Data Errors in Air Carrier Operations: Causes and Countermeasures
NASA Technical Reports Server (NTRS)
Berman, Benjamin A.; Dismukes, R Key; Jobe, Kimberly K.
2012-01-01
Several airline accidents have occurred in recent years as the result of erroneous weight or performance data used to calculate V-speeds, flap/trim settings, required runway lengths, and/or required climb gradients. In this report we consider 4 recent studies of performance data error, report our own study of ASRS-reported incidents, and provide countermeasures that can reduce vulnerability to accidents caused by performance data errors. Performance data are generated through a lengthy process involving several employee groups and computer and/or paper-based systems. Although much of the airline industry's concern has focused on errors pilots make in entering FMS data, we determined that errors occur at every stage of the process and that errors by ground personnel are probably at least as frequent and certainly as consequential as errors by pilots. Most of the errors we examined could in principle have been trapped by effective use of existing procedures or technology; however, the fact that they were not trapped anywhere indicates the need for better countermeasures. Existing procedures are often inadequately designed to mesh with the ways humans process information. Because procedures often do not take into account the ways in which information flows in actual flight ops and the time pressures and interruptions experienced by pilots and ground personnel, vulnerability to error is greater. Some aspects of NextGen operations may exacerbate this vulnerability. We identify measures to reduce the number of errors and to help catch the errors that occur.
Human Error as an Emergent Property of Action Selection and Task Place-Holding.
Tamborello, Franklin P; Trafton, J Gregory
2017-05-01
A computational process model could explain how the dynamic interaction of human cognitive mechanisms produces each of multiple error types. With increasing capability and complexity of technological systems, the potential severity of consequences of human error is magnified. Interruption greatly increases people's error rates, as does the presence of other information to maintain in an active state. The model executed as a software-instantiated Monte Carlo simulation. It drew on theoretical constructs such as associative spreading activation for prospective memory, explicit rehearsal strategies as a deliberate cognitive operation to aid retrospective memory, and decay. The model replicated the 30% effect of interruptions on postcompletion error in Ratwani and Trafton's Stock Trader task, the 45% interaction effect on postcompletion error of working memory capacity and working memory load from Byrne and Bovair's Phaser Task, as well as the 5% perseveration and 3% omission effects of interruption from the UNRAVEL Task. Error classes including perseveration, omission, and postcompletion error fall naturally out of the theory. The model explains post-interruption error in terms of task state representation and priming for recall of subsequent steps. Its performance suggests that task environments providing more cues to current task state will mitigate error caused by interruption. For example, interfaces could provide labeled progress indicators or facilities for operators to quickly write notes about their task states when interrupted.
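A hedged sketch of this modeling style (not the authors' actual model): each pending step holds an activation that decays over time and receives noise; an interruption inserts a delay, and a postcompletion step is forgotten when a competing step's activation wins at retrieval. All parameters are illustrative, not fitted values.

```python
import random

def run_trial(interrupted, decay=0.05, noise_sd=0.3):
    """One simulated trial: is the postcompletion (PC) step retrieved?"""
    delay = 10.0 if interrupted else 2.0   # time since the step was encoded
    pc_activation = 1.5 - decay * delay + random.gauss(0, noise_sd)
    competing = 0.8 + random.gauss(0, noise_sd)  # e.g., "start next trial"
    return pc_activation > competing       # True = correct, False = error

def error_rate(interrupted, n=10_000):
    return sum(not run_trial(interrupted) for _ in range(n)) / n

random.seed(1)
print("baseline PC error:", error_rate(False))
print("post-interruption PC error:", error_rate(True))
```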
Error Rates in Users of Automatic Face Recognition Software
White, David; Dunn, James D.; Schmid, Alexandra C.; Kemp, Richard I.
2015-01-01
In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated ‘candidate lists’ selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers–who use the system in their daily work–and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced “facial examiners” outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems–potentially reducing benchmark estimates by 50% in operational settings. Mere practise does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems. PMID:26465631
Online Deviation Detection for Medical Processes
Christov, Stefan C.; Avrunin, George S.; Clarke, Lori A.
2014-01-01
Human errors are a major concern in many medical processes. To help address this problem, we are investigating an approach for automatically detecting when performers of a medical process deviate from the acceptable ways of performing that process as specified by a detailed process model. Such deviations could represent errors and, thus, detecting and reporting deviations as they occur could help catch errors before harm is done. In this paper, we identify important issues related to the feasibility of the proposed approach and empirically evaluate the approach for two medical procedures, chemotherapy and blood transfusion. For the evaluation, we use the process models to generate sample process executions that we then seed with synthetic errors. The process models describe the coordination of activities of different process performers in normal, as well as in exceptional situations. The evaluation results suggest that the proposed approach could be applied in clinical settings to help catch errors before harm is done. PMID:25954343
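A minimal sketch of the detection idea: encode the process model as allowed transitions between steps and flag any observed event the model does not permit from the current state. The paper's process models are far richer (multiple performers, exceptional situations); the step names below are invented.

```python
# Toy deviation detector: the process model as allowed transitions.
allowed = {
    "verify_patient_id": {"check_blood_product"},
    "check_blood_product": {"start_transfusion"},
    "start_transfusion": {"monitor_vitals"},
    "monitor_vitals": {"monitor_vitals", "end_transfusion"},
}

def detect_deviations(events):
    state = "verify_patient_id"
    for step in events:
        if step not in allowed.get(state, set()):
            yield (state, step)   # deviation: step not allowed here
        state = step

observed = ["check_blood_product", "start_transfusion",
            "end_transfusion"]    # skipped monitoring: flagged
print(list(detect_deviations(observed)))
```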
NASA Astrophysics Data System (ADS)
Nunez, F.; Romero, A.; Clua, J.; Mas, J.; Tomas, A.; Catalan, A.; Castellsaguer, J.
2005-08-01
MARES (Muscle Atrophy Research and Exercise System) is a computerized ergometer for neuromuscular research to be flown and installed onboard the International Space Station in 2007. Validity of the data acquired depends on controlling and reducing all significant error sources. One of them is the misalignment of the joint rotation axis with respect to the motor axis. The error induced on the measurements is proportional to the misalignment between the two axes. Therefore, the restraint system's performance is critical [1]. The MARES HRS (Human Restraint System) assures alignment within an acceptable range while performing the exercise (elbow movement: 13.94 mm ± 5.45; knee movement: 22.36 mm ± 6.06) and reproducibility of human positioning (elbow movement: 2.82 mm ± 1.56; knee movement: 7.45 mm ± 4.8). These results allow limiting the measurement errors induced by misalignment.
Associations between errors and contributing factors in aircraft maintenance
NASA Technical Reports Server (NTRS)
Hobbs, Alan; Williamson, Ann
2003-01-01
In recent years cognitive error models have provided insights into the unsafe acts that lead to many accidents in safety-critical environments. Most models of accident causation are based on the notion that human errors occur in the context of contributing factors. However, there is a lack of published information on possible links between specific errors and contributing factors. A total of 619 safety occurrences involving aircraft maintenance were reported using a self-completed questionnaire. Of these occurrences, 96% were related to the actions of maintenance personnel. The types of errors that were involved, and the contributing factors associated with those actions, were determined. Each type of error was associated with a particular set of contributing factors and with specific occurrence outcomes. Among the associations were links between memory lapses and fatigue and between rule violations and time pressure. Potential applications of this research include assisting with the design of accident prevention strategies, the estimation of human error probabilities, and the monitoring of organizational safety performance.
Procedural error monitoring and smart checklists
NASA Technical Reports Server (NTRS)
Palmer, Everett
1990-01-01
Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
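The reasonableness-check concept described above is easy to sketch: compare a proposed flight plan modification's added route length and course change against thresholds before accepting it. Thresholds and function names here are illustrative, not from any avionics system.

```python
def course_change_deg(old_heading, new_heading):
    """Smallest absolute angular difference between two headings."""
    d = abs(new_heading - old_heading) % 360
    return min(d, 360 - d)

def reasonableness_check(added_length_nm, old_heading, new_heading,
                         max_added_nm=200, max_turn_deg=90):
    """Flag flight plan edits with implausibly large effects (illustrative)."""
    warnings = []
    if added_length_nm > max_added_nm:
        warnings.append(f"route grows by {added_length_nm} nm")
    turn = course_change_deg(old_heading, new_heading)
    if turn > max_turn_deg:
        warnings.append(f"course changes by {turn:.0f} deg")
    return warnings

print(reasonableness_check(350, old_heading=80, new_heading=260))
```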
ERIC Educational Resources Information Center
Pourtois, Gilles; Vocat, Roland; N'Diaye, Karim; Spinelli, Laurent; Seeck, Margitta; Vuilleumier, Patrik
2010-01-01
We studied error monitoring in a human patient with unique implantation of depth electrodes in both the left dorsal cingulate gyrus and medial temporal lobe prior to surgery. The patient performed a speeded go/nogo task and made a substantial number of commission errors (false alarms). As predicted, intracranial Local Field Potentials (iLFPs) in…
Causes and Prevention of Laparoscopic Bile Duct Injuries
Way, Lawrence W.; Stewart, Lygia; Gantert, Walter; Liu, Kingsway; Lee, Crystine M.; Whang, Karen; Hunter, John G.
2003-01-01
Objective To apply human performance concepts in an attempt to understand the causes of and prevent laparoscopic bile duct injury. Summary Background Data Powerful conceptual advances have been made in understanding the nature and limits of human performance. Applying these findings in high-risk activities, such as commercial aviation, has allowed the work environment to be restructured to substantially reduce human error. Methods The authors analyzed 252 laparoscopic bile duct injuries according to the principles of the cognitive science of visual perception, judgment, and human error. The injury distribution was class I, 7%; class II, 22%; class III, 61%; and class IV, 10%. The data included operative radiographs, clinical records, and 22 videotapes of original operations. Results The primary cause of error in 97% of cases was a visual perceptual illusion. Faults in technical skill were present in only 3% of injuries. Knowledge and judgment errors were contributory but not primary. Sixty-four injuries (25%) were recognized at the index operation; the surgeon identified the problem early enough to limit the injury in only 15 (6%). In class III injuries the common duct, erroneously believed to be the cystic duct, was deliberately cut. This stemmed from an illusion of object form due to a specific uncommon configuration of the structures and the heuristic nature (unconscious assumptions) of human visual perception. The videotapes showed the persuasiveness of the illusion, and many operative reports described the operation as routine. Class II injuries resulted from a dissection too close to the common hepatic duct. Fundamentally an illusion, it was contributed to in some instances by working too deep in the triangle of Calot. Conclusions These data show that errors leading to laparoscopic bile duct injuries stem principally from misperception, not errors of skill, knowledge, or judgment. The misperception was so compelling that in most cases the surgeon did not recognize a problem. Even when irregularities were identified, corrective feedback did not occur, which is characteristic of human thinking under firmly held assumptions. These findings illustrate the complexity of human error in surgery while simultaneously providing insights. They demonstrate that automatically attributing technical complications to behavioral factors that rely on the assumption of control is likely to be wrong. Finally, this study shows that there are only a few points within laparoscopic cholecystectomy where the complication-causing errors occur, which suggests that focused training to heighten vigilance might be able to decrease the incidence of bile duct injury. PMID:12677139
NASA Human Research Program: Behavioral Health and Performance Program Element
NASA Technical Reports Server (NTRS)
Leveton, Lauren B.
2009-01-01
This viewgraph presentation reviews the performance errors associated with sleep loss, fatigue, and psychomotor factors during manned space flight. Short- and long-term behavioral health factors are also addressed.
Yordanova, Juliana; Albrecht, Björn; Uebel, Henrik; Kirov, Roumen; Banaschewski, Tobias; Rothenberger, Aribert; Kolev, Vasil
2011-06-01
The maintenance of stable goal-directed behaviour is a hallmark of conscious executive control in humans. Notably, both correct and error human actions may have a subconscious activation-based determination. One possible source of subconscious interference may be the default mode network that, in contrast to the attentional network, manifests intrinsic oscillations at very low (<0.1 Hz) frequencies. In the present study, we analyse the time dynamics of performance accuracy to search for multisecond periodic fluctuations of error occurrence. Attentional lapses in attention deficit/hyperactivity disorder are proposed to originate from interference from intrinsically oscillating networks. Identifying periodic error fluctuations with a frequency < 0.1 Hz in patients with attention deficit/hyperactivity disorder would provide behavioural evidence for such interference. Performance was monitored during a visual flanker task in 92 children (7- to 16-year-olds), 47 with attention deficit/hyperactivity disorder, combined type, and 45 healthy controls. Using an original approach, the time distribution of error occurrence was analysed in the frequency and time-frequency domains in order to detect rhythmic periodicity. The major results demonstrate that in both patients and controls, error behaviour was characterized by multisecond rhythmic fluctuations with a period of ∼12 s, appearing with a delay after transition to task. Only in attention deficit/hyperactivity disorder was there an additional 'pathological' oscillation of error generation, which determined periodic drops of performance accuracy every 20-30 s. Thus, in patients, periodic error fluctuations were modulated by two independent oscillatory patterns. The findings demonstrate that: (i) attentive behaviour of children is determined by multisecond regularities; and (ii) a unique additional periodicity guides performance fluctuations in patients. These observations may re-conceptualize the understanding of attentive behaviour beyond executive top-down control and may reveal new origins of psychopathological behaviours in attention deficit/hyperactivity disorder.
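The frequency-domain analysis of error timing can be sketched as follows: form a binary error time series, then inspect its spectrum for power at multisecond periods such as the ~12 s periodicity reported above. The signal and all parameters here are synthetic.

```python
import numpy as np

fs = 2.0                        # trials (samples) per second, synthetic
t = np.arange(0, 600, 1 / fs)   # a 10-minute session
# Synthetic error probability with a ~12 s periodic component.
p_err = 0.1 + 0.08 * np.sin(2 * np.pi * t / 12.0)
rng = np.random.default_rng(0)
errors = rng.random(t.size) < p_err   # binary error series

spectrum = np.abs(np.fft.rfft(errors - errors.mean())) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

band = (freqs > 0.02) & (freqs < 0.5)  # multisecond range
peak = freqs[band][np.argmax(spectrum[band])]
print(f"peak at {peak:.3f} Hz ~ {1 / peak:.1f} s period")
```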
Development and validation of Aviation Causal Contributors for Error Reporting Systems (ACCERS).
Baker, David P; Krokos, Kelley J
2007-04-01
This investigation sought to develop a reliable and valid classification system for identifying and classifying the underlying causes of pilot errors reported under the Aviation Safety Action Program (ASAP). ASAP is a voluntary safety program that air carriers may establish to study pilot and crew performance on the line. In ASAP programs, similar to the Aviation Safety Reporting System, pilots self-report incidents by filing a short text description of the event. The identification of contributors to errors is critical if organizations are to improve human performance, yet it is difficult for analysts to extract this information from text narratives. A taxonomy was needed that could be used by pilots to classify the causes of errors. After completing a thorough literature review, pilot interviews and a card-sorting task were conducted in Studies 1 and 2 to develop the initial structure of the Aviation Causal Contributors for Event Reporting Systems (ACCERS) taxonomy. The reliability and utility of ACCERS were then tested in Studies 3a and 3b by having pilots independently classify the primary and secondary causes of ASAP reports. The results provided initial evidence for the internal and external validity of ACCERS. Pilots were found to demonstrate adequate levels of agreement with respect to their category classifications. ACCERS appears to be a useful system for studying human error captured under pilot ASAP reports. Future work should focus on how ACCERS is organized and whether it can be used or modified to classify human error in ASAP programs for other aviation-related job categories such as dispatchers. Potential applications of this research include systems in which individuals self-report errors and that attempt to extract and classify the causes of those events.
Yang, Yana; Hua, Changchun; Guan, Xinping
2016-03-01
Due to the cognitive limitations of the human operator and the lack of complete information about the remote environment, the work performance of such teleoperation systems cannot be guaranteed in most cases. However, some practical tasks conducted with teleoperation systems require high performance; tele-surgery, for example, needs high speed and high-precision control to safeguard the patient's health. To obtain satisfactory performance, error-constrained control is employed by applying the barrier Lyapunov function (BLF). With constrained synchronization errors, several high performances, such as high convergence speed, small overshoot, and an arbitrarily predefined small residual constrained synchronization error, can be achieved simultaneously. Nevertheless, as with many classical control schemes, only asymptotic/exponential convergence, i.e., synchronization errors that converge to zero as time goes to infinity, can be achieved with error-constrained control. It is clear that finite-time convergence is more desirable. To obtain finite-time synchronization performance, a terminal sliding mode (TSM)-based finite-time control method is developed in this paper for teleoperation systems with position error constraints. First, a new nonsingular fast terminal sliding mode (NFTSM) surface with newly transformed synchronization errors is proposed. Second, an adaptive neural network system is applied to deal with system uncertainties and external disturbances. Third, the BLF is applied to prove stability and the nonviolation of the synchronization error constraints. Finally, comparisons are conducted in simulation, and experiment results are presented to show the effectiveness of the proposed method.
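One common textbook form of a nonsingular fast terminal sliding surface (a generic form, not necessarily the paper's new surface) combines a linear term for fast convergence far from the origin with fractional-power terms for finite-time convergence near it:

```python
import numpy as np

def nftsm_surface(e, e_dot, alpha=2.0, beta=1.0, p=5, q=3):
    """Generic nonsingular fast terminal sliding surface (illustrative):
        s = e + alpha*|e|**(p/q)*sign(e) + beta*|e_dot|**(p/q)*sign(e_dot)
    Choosing p/q in (1, 2) keeps the surface nonsingular
    (no negative exponents appear in the control law)."""
    frac = p / q
    return (e + alpha * np.sign(e) * np.abs(e) ** frac
              + beta * np.sign(e_dot) * np.abs(e_dot) ** frac)

# On the surface (s = 0) the tracking error converges in finite time;
# the controller is designed to drive s to zero despite uncertainties.
print(nftsm_surface(e=0.5, e_dot=-0.2))
```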
Mentoring Human Performance - 12480
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geis, John A.; Haugen, Christian N.
2012-07-01
Although the positive effects of implementing a human performance approach to operations can be hard to quantify, many organizations and industry areas are finding tangible benefits to such a program. Recently, a unique mentoring program was established and implemented focusing on improving the performance of managers, supervisors, and work crews, using the principles of Human Performance Improvement (HPI). The goal of this mentoring was to affect behaviors and habits that reliably implement the principles of HPI to ensure continuous improvement in implementation of an Integrated Safety Management System (ISMS) within a Conduct of Operations framework. Mentors engaged with personnel in a one-on-one, or one-on-many, dialogue focused on what behaviors were observed, what factors underlie the behaviors, and what changes in behavior could prevent errors or events and improve performance. A senior management sponsor was essential to gain broad management support. A clear charter and management plan describing the goals, objectives, methodology, and expected outcomes was established. Mentors were carefully selected with senior management endorsement. Mentors were assigned to projects and work teams based on the following three criteria: 1) knowledge of the work scope; 2) experience in similar project areas; and 3) the perceived level of trust they would have with project management, supervision, and work teams. This program was restructured significantly when the American Recovery and Reinvestment Act (ARRA) and the associated funding came to an end. The restructuring was based on an understanding of the observations, attributed successes and identified shortfalls, and the consolidation of those lessons. Mentoring the application of proven methods for improving human performance was shown to be effective at increasing success in day-to-day activities and increasing the confidence and skill level of supervisors. While mentoring program effectiveness is difficult to measure, and return on investment is difficult to quantify, especially in complex and large organizations where causal factors are difficult to correlate directly, the evidence presented by Sydney Dekker, James Reason, and others who study the field of human factors does assert that managing and reducing error is possible. Employment of key behaviors (HPI techniques and skills) can be shown to have a significant impact on error rates. Our mentoring program demonstrated reduced error rates and corresponding improvements in safety and production. Improved behaviors are the result of providing a culture with consistent, clear expectations from leadership, and processes and methods applied consistently to error prevention. Mentoring, as envisioned and executed in this program, was effective in helping shift organizational culture and improving safety and production. (authors)
NASA Technical Reports Server (NTRS)
Mercer, Joey S.; Bienert, Nancy; Gomez, Ashley; Hunt, Sarah; Kraut, Joshua; Martin, Lynne; Morey, Susan; Green, Steven M.; Prevot, Thomas; Wu, Minghong G.
2013-01-01
A Human-In-The-Loop air traffic control simulation investigated the impact of uncertainties in trajectory predictions on NextGen Trajectory-Based Operations concepts, seeking to understand when the automation would become unacceptable to controllers or when performance targets could no longer be met. Retired air traffic controllers staffed two en route transition sectors, delivering arrival traffic to the northwest corner-post of Atlanta approach control under time-based metering operations. Using trajectory-based decision-support tools, the participants worked the traffic under varying levels of wind forecast error and aircraft performance model error, impacting the ground automation's ability to make accurate predictions. Results suggest that the controllers were able to maintain high levels of performance, despite even the highest levels of trajectory prediction error.
Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge
2014-12-01
Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed in a wand. The second error is the error of position and orientation of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated in a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. The resulting errors are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
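The second stage, a nonlinear optimization that minimizes the wand-length error, can be sketched with scipy. Here a single scale parameter stands in for the full intrinsic/extrinsic parameter vector, a heavy simplification of the actual procedure; the data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

wand_length = 500.0   # mm, calibrated on a CMM

# Reconstructed positions of the wand's two markers over several frames,
# using initial (Tsai) parameter estimates; synthetic data with a 2%
# scale error for the optimizer to recover.
rng = np.random.default_rng(0)
p1 = rng.uniform(0, 1000, size=(50, 3))
direction = rng.normal(size=(50, 3))
direction /= np.linalg.norm(direction, axis=1, keepdims=True)
p2 = p1 + 1.02 * wand_length * direction

def residuals(params):
    scale = params[0]   # stand-in for the full parameter vector
    d = np.linalg.norm(scale * p2 - scale * p1, axis=1)
    return d - wand_length

fit = least_squares(residuals, x0=[1.0])
print(f"recovered scale correction: {fit.x[0]:.4f}")   # ~0.9804 = 1/1.02
```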
Schönberg, Tom; Daw, Nathaniel D; Joel, Daphna; O'Doherty, John P
2007-11-21
The computational framework of reinforcement learning has been used to forward our understanding of the neural mechanisms underlying reward learning and decision-making behavior. It is known that humans vary widely in their performance in decision-making tasks. Here, we used a simple four-armed bandit task in which subjects are almost evenly split into two groups on the basis of their performance: those who do learn to favor choice of the optimal action and those who do not. Using models of reinforcement learning we sought to determine the neural basis of these intrinsic differences in performance by scanning both groups with functional magnetic resonance imaging. We scanned 29 subjects while they performed the reward-based decision-making task. Our results suggest that these two groups differ markedly in the degree to which reinforcement learning signals in the striatum are engaged during task performance. While the learners showed robust prediction error signals in both the ventral and dorsal striatum during learning, the nonlearner group showed a marked absence of such signals. Moreover, the magnitude of prediction error signals in a region of dorsal striatum correlated significantly with a measure of behavioral performance across all subjects. These findings support a crucial role of prediction error signals, likely originating from dopaminergic midbrain neurons, in enabling learning of action selection preferences on the basis of obtained rewards. Thus, spontaneously observed individual differences in decision making performance demonstrate the suggested dependence of this type of learning on the functional integrity of the dopaminergic striatal system in humans.
INFORMATION OR NOISE. AN INVESTIGATION OF RESPONSE ERRORS.
Descriptors: (*Behavior, Social Psychology), (*Social Sciences, Scientific Research), Performance (Human), Reaction (Psychology), Test Construction (Psychology), Attitudes (Psychology), Motivation, Psychological Tests, Public Opinion
The effects of error augmentation on learning to walk on a narrow balance beam.
Domingo, Antoinette; Ferris, Daniel P
2010-10-01
Error augmentation during training has been proposed as a means to facilitate motor learning due to the human nervous system's reliance on performance errors to shape motor commands. We studied the effects of error augmentation on short-term learning of walking on a balance beam to determine whether it had beneficial effects on motor performance. Four groups of able-bodied subjects walked on a treadmill-mounted balance beam (2.5-cm wide) before and after 30 min of training. During training, two groups walked on the beam with a destabilization device that augmented error (Medium and High Destabilization groups). A third group walked on a narrower beam (1.27-cm) to augment error (Narrow). The fourth group practiced walking on the 2.5-cm balance beam (Wide). Subjects in the Wide group had significantly greater improvements after training than the error augmentation groups. The High Destabilization group had significantly less performance gains than the Narrow group in spite of similar failures per minute during training. In a follow-up experiment, a fifth group of subjects (Assisted) practiced with a device that greatly reduced catastrophic errors (i.e., stepping off the beam) but maintained similar pelvic movement variability. Performance gains were significantly greater in the Wide group than the Assisted group, indicating that catastrophic errors were important for short-term learning. We conclude that increasing errors during practice via destabilization and a narrower balance beam did not improve short-term learning of beam walking. In addition, the presence of qualitatively catastrophic errors seems to improve short-term learning of walking balance.
Human factors evaluation of remote afterloading brachytherapy. Volume 2, Function and task analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callan, J.R.; Gwynne, J.W. III; Kelly, T.T.
1995-05-01
A human factors project on the use of nuclear by-product material to treat cancer using remotely operated afterloaders was undertaken by the Nuclear Regulatory Commission. The purpose of the project was to identify factors that contribute to human error in the system for remote afterloading brachytherapy (RAB). This report documents the findings from the first phase of the project, which involved an extensive function and task analysis of RAB. This analysis identified the functions and tasks in RAB, made preliminary estimates of the likelihood of human error in each task, and determined the skills needed to perform each RAB task. The findings of the function and task analysis served as the foundation for the remainder of the project, which evaluated four major aspects of the RAB system linked to human error: human-system interfaces; procedures and practices; training and qualifications of RAB staff; and organizational practices and policies. At its completion, the project identified and prioritized areas for recommended NRC and industry attention based on all of the evaluations and analyses.
Human performance cognitive-behavioral modeling: a benefit for occupational safety.
Gore, Brian F
2002-01-01
Human Performance Modeling (HPM) is a computer-aided job analysis software methodology used to generate predictions of complex human-automation integration and system flow patterns with the goal of improving operator and system safety. The use of HPM tools has recently been increasing due to reductions in computational cost, augmentations in the tools' fidelity, and usefulness in the generated output. An examination of an Air Man-machine Integration Design and Analysis System (Air MIDAS) model evaluating complex human-automation integration currently underway at NASA Ames Research Center will highlight the importance to occupational safety of considering both cognitive and physical aspects of performance when researching human error.
Error reduction in EMG signal decomposition
Kline, Joshua C.
2014-01-01
Decomposition of the electromyographic (EMG) signal into constituent action potentials and the identification of individual firing instances of each motor unit in the presence of ambient noise are inherently probabilistic processes, whether performed manually or with automated algorithms. Consequently, they are subject to errors. We set out to classify and reduce these errors by analyzing 1,061 motor-unit action-potential trains (MUAPTs), obtained by decomposing surface EMG (sEMG) signals recorded during human voluntary contractions. Decomposition errors were classified into two general categories: location errors representing variability in the temporal localization of each motor-unit firing instance and identification errors consisting of falsely detected or missed firing instances. To mitigate these errors, we developed an error-reduction algorithm that combines multiple decomposition estimates to determine a more probable estimate of motor-unit firing instances with fewer errors. The performance of the algorithm is governed by a trade-off between the yield of MUAPTs obtained above a given accuracy level and the time required to perform the decomposition. When applied to a set of sEMG signals synthesized from real MUAPTs, the identification error was reduced by an average of 1.78%, improving the accuracy to 97.0%, and the location error was reduced by an average of 1.66 ms. The error-reduction algorithm in this study is not limited to any specific decomposition strategy. Rather, we propose it be used for other decomposition methods, especially when analyzing precise motor-unit firing instances, as occurs when measuring synchronization. PMID:25210159
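The combination step the abstract describes can be pictured with a small consensus routine: merge several estimated firing trains for one motor unit and keep only the firings that multiple decompositions agree on within a short tolerance window. The sketch below is illustrative only; the tolerance, vote threshold, and data are assumptions, not the authors' published algorithm.

```python
import numpy as np

def consensus_firings(estimates, tol=0.002, min_votes=2):
    """Merge several estimated firing-instance trains (in seconds) for one
    motor unit into a consensus train: a firing is kept only if at least
    `min_votes` estimates place a spike within `tol` seconds of it."""
    pooled = np.sort(np.concatenate(estimates))
    consensus = []
    i = 0
    while i < len(pooled):
        # group pooled firings that fall within the tolerance window
        j = i
        while j + 1 < len(pooled) and pooled[j + 1] - pooled[i] <= tol:
            j += 1
        if j - i + 1 >= min_votes:
            consensus.append(pooled[i:j + 1].mean())  # consensus timing
        i = j + 1
    return np.array(consensus)

# Three noisy estimates of the same motor-unit train (illustrative data)
rng = np.random.default_rng(0)
truth = np.arange(0.1, 2.0, 0.1)
ests = [np.sort(truth + rng.normal(0, 0.0005, truth.size)) for _ in range(3)]
print(consensus_firings(ests).round(3))
```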
Uncorrected refractive errors.
Naidoo, Kovin S; Jaggernath, Jyoti
2012-01-01
Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.
Effects of noise on the performance of a memory decision response task
NASA Technical Reports Server (NTRS)
Lawton, B. W.
1972-01-01
An investigation has been made to determine the effects of noise on human performance. Fourteen subjects performed a memory-decision-response task in relative quiet and while listening to tape-recorded noises. Analysis of the data obtained indicates that performance was degraded in the presence of noise. Significant increases in problem solution times were found for impulsive noise conditions as compared with times found for the no-noise condition. Performance accuracy was also degraded: significantly more error responses occurred at higher noise levels, and a direct, positive relation was found between error responses and the noise level experienced by the subjects.
Complacency and Automation Bias in the Use of Imperfect Automation.
Wickens, Christopher D; Clegg, Benjamin A; Vieane, Alex Z; Sebok, Angelia L
2015-08-01
We examine the effects of two different kinds of decision-aiding automation errors on human-automation interaction (HAI), occurring at the first failure following repeated exposure to correctly functioning automation. The two errors are incorrect advice, triggering the automation bias, and missing advice, reflecting complacency. Contrasts between analogous automation errors in alerting systems, rather than decision aiding, have revealed that alerting false alarms are more problematic to HAI than alerting misses are. Prior research in decision aiding, although contrasting the two aiding errors (incorrect vs. missing), has confounded error expectancy. Participants performed an environmental process control simulation with and without decision aiding. For those with the aid, automation dependence was created through several trials of perfect aiding performance, and an unexpected automation error was then imposed in which automation was either gone (one group) or wrong (a second group). A control group received no automation support. The correct aid supported faster and more accurate diagnosis and lower workload. The aid failure degraded all three variables, but "automation wrong" had a much greater effect on accuracy, reflecting the automation bias, than did "automation gone," reflecting the impact of complacency. Some complacency was manifested for automation gone, by a longer latency and more modest reduction in accuracy. Automation wrong, creating the automation bias, appears to be a more problematic form of automation error than automation gone, reflecting complacency. Decision-aiding automation should indicate its lower degree of confidence in uncertain environments to avoid the automation bias. © 2015, Human Factors and Ergonomics Society.
A cognitive taxonomy of medical errors.
Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H
2004-06-01
Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and a foundation for the development of a medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.
A system dynamic simulation model for managing the human error in power tools industries
NASA Astrophysics Data System (ADS)
Jamil, Jastini Mohd; Shaharanee, Izwan Nizal Mohd
2017-10-01
In the modern, competitive world of today, every organization faces situations in which work does not proceed as planned and must be delayed, and human error is often cited as the culprit. Errors made by employees force them to spend additional time identifying and checking the error, which in turn can affect the normal operations of the company as well as the company's reputation. Employees are a key element of the organization in running all of its activities; hence, employee work performance is a crucial factor in organizational success. The purpose of this study is to identify the factors that cause increasing errors made by employees in the organization, using a system dynamics approach. The target population in this study comprised employees of the Regional Material Field team in the purchasing department of the power tools industry. Questionnaires were distributed to the respondents to obtain their perceptions of the root causes of errors made by employees in the company. A system dynamics model was developed to simulate the factors behind the increase in employee errors and their impact. The findings of this study showed that the increase in errors made by employees was generally caused by workload, work capacity, job stress, motivation, and employee performance. The problem could be mitigated by increasing the number of employees in the organization.
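A minimal stock-and-flow sketch of the kind of model the abstract describes is given below. The causal loop (workload raises job stress, stress lowers performance and raises the error rate, and errors return as rework that adds workload) and every coefficient are assumptions for illustration, not the calibrated model from the study.

```python
# Minimal system-dynamics sketch: workload raises job stress, stress
# erodes performance, and errors feed back as rework that adds workload.
dt, horizon = 0.25, 52.0          # time step and horizon (weeks), assumed
n_employees = 10.0                # work capacity proxy, assumed
backlog = 40.0                    # stock: outstanding tasks
errors_made = 0.0                 # stock: cumulative errors

for _ in range(int(horizon / dt)):
    inflow = 12.0                              # new tasks per week, assumed
    workload = backlog / n_employees           # tasks per employee
    stress = min(1.0, workload / 8.0)          # normalized job stress
    performance = 1.0 - 0.5 * stress           # motivation/performance proxy
    error_rate = 0.02 + 0.10 * stress          # errors per completed task
    completion = n_employees * 3.0 * performance
    rework = completion * error_rate           # errors return as extra work
    backlog = max(backlog + dt * (inflow + rework - completion), 0.0)
    errors_made += dt * completion * error_rate

print(f"final backlog: {backlog:.1f} tasks, cumulative errors: {errors_made:.1f}")
```

Raising `n_employees` in the sketch lowers workload and stress, and with them the error rate, which is consistent with the remedy the abstract proposes.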
Cognition in Space Workshop. 1; Metrics and Models
NASA Technical Reports Server (NTRS)
Woolford, Barbara; Fielder, Edna
2005-01-01
"Cognition in Space Workshop I: Metrics and Models" was the first in a series of workshops sponsored by NASA to develop an integrated research and development plan supporting human cognition in space exploration. The workshop was held in Chandler, Arizona, October 25-27, 2004. The participants represented academia, government agencies, and medical centers. This workshop addressed the following goal of the NASA Human System Integration Program for Exploration: to develop a program to manage risks due to human performance and human error, specifically ones tied to cognition. Risks range from catastrophic error to degradation of efficiency and failure to accomplish mission goals. Cognition itself includes memory, decision making, initiation of motor responses, sensation, and perception. Four subgoals were also defined at the workshop as follows: (1) NASA needs to develop a human-centered design process that incorporates standards for human cognition, human performance, and assessment of human interfaces; (2) NASA needs to identify and assess factors that increase risks associated with cognition; (3) NASA needs to predict risks associated with cognition; and (4) NASA needs to mitigate risk, both prior to actual missions and in real time. This report develops the material relating to these four subgoals.
New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.
Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María
2017-08-01
In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations, and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The associated errors of the measurements were also estimated applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables in comparison with their own real values is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. These results significantly improve on those of other methodologies, both in interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can be easily exported to other modern human populations, the human fossil record and forensic sciences. © 2017 Wiley Periodicals, Inc.
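The core of such a reconstruction can be sketched in a few lines: fit a polynomial regression to the preserved portion of a 2-D enamel profile, extrapolate it across the worn region, and validate against a simulated unworn profile. The parabolic profile, wear plane, and degree-2 fit below are stand-in assumptions, not the equations from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D cusp outline (height vs. position), a stand-in for the
# digitized enamel profile of a protoconid
x = np.linspace(0.0, 4.0, 81)
y = -0.4 * (x - 2.0) ** 2 + 1.6 + rng.normal(0, 0.01, x.size)

# Simulated wear: enamel above an assumed wear plane is lost
preserved = y <= 1.2
coeffs = np.polyfit(x[preserved], y[preserved], deg=2)

# Extrapolate the regression across the worn region to rebuild the cusp
y_hat = np.polyval(coeffs, x)
true_h, est_h = y.max(), y_hat.max()
print(f"reconstructed crown height {est_h:.3f} vs true {true_h:.3f} "
      f"({100 * (est_h - true_h) / true_h:+.2f}% error)")
```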
Sensitivity to prediction error in reach adaptation
Haith, Adrian M.; Harran, Michelle D.; Shadmehr, Reza
2012-01-01
It has been proposed that the brain predicts the sensory consequences of a movement and compares this prediction to the actual sensory feedback. When the two differ, an error signal is formed, driving adaptation. How does an error in one trial alter performance in the subsequent trial? Here we show that the sensitivity to error is not constant but declines as a function of error magnitude. That is, one learns relatively less from large errors compared with small errors. We performed an experiment in which humans made reaching movements and randomly experienced an error in both their visual and proprioceptive feedback. Proprioceptive errors were created with force fields, and visual errors were formed by perturbing the cursor trajectory to create a visual error that was smaller, the same size, or larger than the proprioceptive error. We measured single-trial adaptation and calculated sensitivity to error, i.e., the ratio of the trial-to-trial change in motor commands to error size. We found that for both sensory modalities sensitivity decreased with increasing error size. A reanalysis of a number of previously published psychophysical results also exhibited this feature. Finally, we asked how the brain might encode sensitivity to error. We reanalyzed previously published probabilities of cerebellar complex spikes (CSs) and found that this probability declined with increasing error size. From this we posit that a CS may be representative of the sensitivity to error, and not error itself, a hypothesis that may explain conflicting reports about CSs and their relationship to error. PMID:22773782
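The sensitivity measure itself is simple to compute: the trial-to-trial change in motor commands divided by the error that drove it, binned by error magnitude. The sketch below simulates a learner whose error sensitivity declines with error size (the decline law and noise level are assumed for illustration) and recovers that decline from the simulated trials.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate single-trial adaptation where the learner corrects a fraction
# of each error, and that fraction shrinks with error magnitude (assumed)
errors = rng.uniform(2, 30, 500) * rng.choice([-1, 1], 500)   # imposed (deg)
true_sens = 0.4 / (1.0 + np.abs(errors) / 10.0)               # decline law
corrections = true_sens * errors + rng.normal(0, 0.5, errors.size)

# Empirical sensitivity: trial-to-trial change in motor command / error
bins = np.linspace(2, 30, 8)
mags = np.abs(errors)
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (mags >= lo) & (mags < hi)
    s = np.mean(corrections[sel] / errors[sel])
    print(f"|error| {lo:4.1f}-{hi:4.1f} deg: sensitivity = {s:.2f}")
```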
Use of modeling to identify vulnerabilities to human error in laparoscopy.
Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra
2010-01-01
This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
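The prioritization step of a modified FMEA can be illustrated with the standard risk priority number, RPN = occurrence x severity x detection, each scored on a 1-10 scale. The tasks, errors, and scores below are invented for illustration and are not from the study's analysis.

```python
# Standard FMEA prioritization: Risk Priority Number = occurrence x
# severity x detection (each scored 1-10; 10 = most likely / most severe /
# hardest to detect). Tasks and scores below are illustrative only.
potential_errors = [
    # (task, error, occurrence, severity, detection)
    ("elevate abdominal wall", "insufficient lift", 6, 7, 4),
    ("insert Veress needle", "over-insertion past peritoneum", 4, 9, 6),
    ("verify placement", "misread aspiration test", 3, 8, 7),
]

ranked = sorted(
    ((o * s * d, task, err) for task, err, o, s, d in potential_errors),
    reverse=True,
)
for rpn, task, err in ranked:
    print(f"RPN {rpn:3d}  {task}: {err}")
```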
Human Error: A Concept Analysis
NASA Technical Reports Server (NTRS)
Hansen, Frederick D.
2007-01-01
Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed, and a definition of human error is offered.
Radiology's Achilles' heel: error and variation in the interpretation of the Röntgen image.
Robinson, P J
1997-11-01
The performance of the human eye and brain has failed to keep pace with the enormous technical progress in the first full century of radiology. Errors and variations in interpretation now represent the weakest aspect of clinical imaging. Those interpretations which differ from the consensus view of a panel of "experts" may be regarded as errors; where experts fail to achieve consensus, differing reports are regarded as "observer variation". Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. Observer variation is substantial and should be taken into account when different diagnostic methods are compared; in many cases the difference between observers outweighs the difference between techniques. Strategies for reducing error include attention to viewing conditions, training of the observers, availability of previous films and relevant clinical data, dual or multiple reporting, standardization of terminology and report format, and assistance from computers. Digital acquisition and display will probably not affect observer variation but the performance of radiologists, as measured by receiver operating characteristic (ROC) analysis, may be improved by computer-directed search for specific image features. Other current developments show that where image features can be comprehensively described, computer analysis can replace the perception function of the observer, whilst the function of interpretation can in some cases be performed better by artificial neural networks. However, computer-assisted diagnosis is still in its infancy and complete replacement of the human observer is as yet a remote possibility.
Three dimensional tracking with misalignment between display and control axes
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Tyler, Mitchell; Kim, Won S.; Stark, Lawrence
1992-01-01
Human operators confronted with misaligned display and control frames of reference performed three-dimensional pursuit tracking in virtual environment and virtual space simulations. Analysis of the tracking-error components in the perspective displays presenting virtual space showed that error components due to visuomotor misalignment may be linearly separated from those associated with the mismatch between display and control coordinate systems. Tracking performance improved with several hours of practice, despite previous reports that such improvement did not take place.
NASA Astrophysics Data System (ADS)
Coyne, Kevin Anthony
The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, the operators often are called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules are capable of efficiently capturing a wide spectrum of crew-to-crew variabilities. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as timing of operator actions, mental models, and decision-making activities.
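The discrete dynamic event tree mechanism can be sketched as the cross product of a few branching rules, each splitting the scenario on a crew or plant variation, with low-probability branches pruned. The rule names, branch probabilities, and cutoff below are illustrative assumptions, not ADS-IDAC internals.

```python
from itertools import product

# Each branching rule is a set of mutually exclusive crew/plant variations
# with branch probabilities (illustrative values, not ADS-IDAC data).
branching_rules = {
    "procedure_pace": [("nominal", 0.6), ("slow", 0.2), ("fast", 0.2)],
    "step_execution": [("all steps", 0.9), ("skips step", 0.1)],
    "aux_feed_pump": [("runs", 0.95), ("fails to start", 0.05)],
}

# Expand the discrete dynamic event tree: one scenario per combination,
# pruning branches below a probability cutoff as DDET tools typically do.
CUTOFF = 1e-2
scenarios = []
for combo in product(*branching_rules.values()):
    p = 1.0
    for _, branch_p in combo:
        p *= branch_p
    if p >= CUTOFF:
        scenarios.append(([name for name, _ in combo], p))

for labels, p in sorted(scenarios, key=lambda s: -s[1]):
    print(f"p={p:.3f}  " + " | ".join(labels))
print(f"{len(scenarios)} scenarios retained above cutoff")
```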
Sharing control with haptics: seamless driver support from manual to automatic control.
Mulder, Mark; Abbink, David A; Boer, Erwin R
2012-10-01
Haptic shared control was investigated as a human-machine interface that can intuitively share control between drivers and an automatic controller for curve negotiation. As long as automation systems are not fully reliable, a role remains for the driver to be vigilant to the system and the environment to catch any automation errors. The conventional binary switch between supervisory and manual control has many known issues, and haptic shared control is a promising alternative. A total of 42 respondents of varying age and driving experience participated in a driving experiment in a fixed-base simulator, in which curve negotiation behavior during shared control was compared with manual control, as well as with three haptic tunings of an automatic controller without driver intervention. Under the experimental conditions studied, the main beneficial effect of haptic shared control compared to manual control was that less control activity (16% in steering wheel reversal rate, 15% in standard deviation of steering wheel angle) was needed for realizing an improved safety performance (e.g., 11% in peak lateral error). Full automation removed the need for any human control activity and improved safety performance (e.g., 35% in peak lateral error) but put the human in a supervisory position. Haptic shared control kept the driver in the loop, with enhanced performance at reduced control activity, mitigating the known issues that plague full automation. Haptic support for vehicular control ultimately seeks to intuitively combine human intelligence and creativity with the benefits of automation systems.
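The control-activity and safety measures reported here (steering wheel reversal rate, standard deviation of steering wheel angle, peak lateral error) can be computed from simulator time series as sketched below. The 2-deg reversal threshold and the synthetic signals are assumptions, since the abstract does not give the paper's exact metric definitions.

```python
import numpy as np

def steering_metrics(angle, lateral_err, fs=50.0, gap=2.0):
    """Control-activity and safety metrics from simulator time series.
    angle: steering wheel angle (deg); lateral_err: deviation from the
    path centerline (m); fs: sample rate (Hz); gap: minimum swing (deg)
    counted as a reversal (the 2-deg threshold is an assumption)."""
    d = np.diff(angle)
    turning = np.flatnonzero(d[:-1] * d[1:] < 0) + 1   # local extrema
    ext = np.concatenate(([0], turning, [len(angle) - 1]))
    swings = np.abs(np.diff(angle[ext]))
    reversals = int(np.sum(swings > gap))
    minutes = len(angle) / fs / 60.0
    return {
        "reversal_rate_per_min": reversals / minutes,
        "sd_steering_deg": float(np.std(angle)),
        "peak_lateral_error_m": float(np.max(np.abs(lateral_err))),
    }

# Illustrative signals: 60 s of gentle weaving plus corrective jitter
rng = np.random.default_rng(3)
t = np.arange(0, 60, 1 / 50.0)
angle = 5 * np.sin(0.5 * t) + rng.normal(0, 1.0, t.size)
lat = 0.3 * np.sin(0.2 * t) + rng.normal(0, 0.05, t.size)
print(steering_metrics(angle, lat))
```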
Human error mitigation initiative (HEMI) : summary report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.
2004-11-01
Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and are cumbersome to characterize as thorough. An alternative and proposed method begins with leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Future recommended steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.
2002-01-01
A process task analysis effort was undertaken by Dynacs Inc. commencing in June 2002 under contract from NASA YA-D6. Funding was provided through NASA's Ames Research Center (ARC), Code M/HQ, and Industrial Engineering and Safety (IES). The John F. Kennedy Space Center (KSC) Engineering Development Contract (EDC) Task Order was 5SMA768. The scope of the effort was to conduct a Human Factors Process Failure Modes and Effects Analysis (HF PFMEA) of a hazardous activity and provide recommendations to eliminate or reduce the effects of errors caused by human factors. The Liquid Oxygen (LOX) Pump Acceptance Test Procedure (ATP) was selected for this analysis. The HF PFMEA table (see appendix A) provides an analysis of six major categories evaluated for this study. These categories include Personnel Certification, Test Procedure Format, Test Procedure Safety Controls, Test Article Data, Instrumentation, and Voice Communication. For each specific requirement listed in appendix A, the following topics were addressed: Requirement, Potential Human Error, Performance-Shaping Factors, Potential Effects of the Error, Barriers and Controls, Risk Priority Numbers, and Recommended Actions. This report summarizes findings and gives recommendations as determined by the data contained in appendix A. It also includes a discussion of technology barriers and challenges to performing task analyses, as well as lessons learned. The HF PFMEA table in appendix A recommends the use of accepted and required safety criteria in order to reduce the risk of human error. The items with the highest risk priority numbers should receive the greatest amount of consideration. Implementation of the recommendations will result in a safer operation for all personnel.
Experimental investigation of control/display augmentation effects in a compensatory tracking task
NASA Technical Reports Server (NTRS)
Garg, Sanjay; Schmidt, David K.
1988-01-01
The effects of control/display augmentation on human performance and workload have been investigated for closed-loop, continuous-tracking tasks by a real-time, man-in-the-loop simulation study. The experimental results obtained indicate that only limited improvement in actual tracking performance is obtainable through display augmentation alone; with a very high level of display augmentation, tracking error will actually grow. Tracking performance improves when status information is furnished for reasonable levels of display quickening; again, very high quickening levels lead to increased tracking error due to the incompatibility between the status information and the quickened signal.
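Display quickening in such tasks classically means showing the operator the tracking error plus scaled derivatives, i.e., a short-horizon prediction of where the error is heading. A minimal sketch follows; the gains are illustrative, and pushing them very high over-quickens the display, consistent with the deterioration reported above.

```python
import numpy as np

def quickened_signal(error, fs=60.0, k1=0.4, k2=0.05):
    """Classic display quickening: show the operator a weighted sum of
    tracking error and its derivatives (a short-horizon prediction).
    Gains k1, k2 are illustrative assumptions."""
    de = np.gradient(error) * fs          # first derivative (1/s)
    d2e = np.gradient(de) * fs            # second derivative (1/s^2)
    return error + k1 * de + k2 * d2e

t = np.arange(0, 10, 1 / 60.0)
err = np.sin(1.5 * t) * np.exp(-0.1 * t)  # decaying oscillatory error
print(quickened_signal(err)[:5].round(3))
```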
Obstetric Neuraxial Drug Administration Errors: A Quantitative and Qualitative Analytical Review.
Patel, Santosh; Loveridge, Robert
2015-12-01
Drug administration errors in obstetric neuraxial anesthesia can have devastating consequences. Although fully recognizing that they represent "only the tip of the iceberg," published case reports/series of these errors were reviewed in detail with the aim of estimating the frequency and the nature of these errors. We identified case reports and case series from MEDLINE and performed a quantitative analysis of the involved drugs, error setting, source of error, the observed complications, and any therapeutic interventions. We subsequently performed a qualitative analysis of the human factors involved and proposed modifications to practice. Twenty-nine cases were identified. Various drugs were given in error, but no direct effects on the course of labor, mode of delivery, or neonatal outcome were reported. Four maternal deaths from the accidental intrathecal administration of tranexamic acid were reported, all occurring after delivery of the fetus. A range of hemodynamic and neurologic signs and symptoms were noted, but the most commonly reported complication was the failure of the intended neuraxial anesthetic technique. Several human factors were present; most common factors were drug storage issues and similar drug appearance. Four practice recommendations were identified as being likely to have prevented the errors. The reported errors exposed latent conditions within health care systems. We suggest that the implementation of the following processes may decrease the risk of these types of drug errors: (1) Careful reading of the label on any drug ampule or syringe before the drug is drawn up or injected; (2) labeling all syringes; (3) checking labels with a second person or a device (such as a barcode reader linked to a computer) before the drug is drawn up or administered; and (4) use of non-Luer lock connectors on all epidural/spinal/combined spinal-epidural devices. Further study is required to determine whether routine use of these processes will reduce drug error.
Effects of auditory radio interference on a fine, continuous, open motor skill.
Lazar, J M; Koceja, D M; Morris, H H
1995-06-01
The effects of human speech on a fine, continuous, and open motor skill were examined. A tape of auditory human radio traffic was injected into a tank gunnery simulator during each training session over 4 wk. of training at 3 hr. per week. The dependent variables were identification time, fire time, kill time, systems errors, and acquisition errors. These were measured by the Unit Conduct Of Fire Trainer (UCOFT). The interference was injected into the UCOFT Tank Table VIII gunnery test. A Solomon four-group design was used. A 2 x 2 analysis of variance was used to assess whether interference gunnery training resulted in improvements in interference posttest scores. During the first three weeks of training, the interference group committed 106% more systems errors and 75% more acquisition errors than the standard group. The interference training condition was associated with a significant improvement from pre- to posttest of 44% in over-all UCOFT scores; on the posttest, however, standard training did not improve performance significantly over the same period. It was concluded that auditory radio interference degrades performance of this fine, continuous, open motor skill, and interference training appears to abate the effects of this degradation.
Seeing the Errors You Feel Enhances Locomotor Performance but Not Learning.
Roemmich, Ryan T; Long, Andrew W; Bastian, Amy J
2016-10-24
In human motor learning, it is thought that the more information we have about our errors, the faster we learn. Here, we show that additional error information can lead to improved motor performance without any concomitant improvement in learning. We studied split-belt treadmill walking that drives people to learn a new gait pattern using sensory prediction errors detected by proprioceptive feedback. When we also provided visual error feedback, participants acquired the new walking pattern far more rapidly and showed accelerated restoration of the normal walking pattern during washout. However, when the visual error feedback was removed during either learning or washout, errors reappeared with performance immediately returning to the level expected based on proprioceptive learning alone. These findings support a model with two mechanisms: a dual-rate adaptation process that learns invariantly from sensory prediction error detected by proprioception and a visual-feedback-dependent process that monitors learning and corrects residual errors but shows no learning itself. We show that our voluntary correction model accurately predicted behavior in multiple situations where visual feedback was used to change acquisition of new walking patterns while the underlying learning was unaffected. The computational and behavioral framework proposed here suggests that parallel learning and error correction systems allow us to rapidly satisfy task demands without necessarily committing to learning, as the relative permanence of learning may be inappropriate or inefficient when facing environments that are liable to change. Copyright © 2016 Elsevier Ltd. All rights reserved.
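The proposed two-mechanism account maps naturally onto a standard dual-rate state-space model plus an output-level correction. In the sketch below, fast and slow states learn only from the proprioceptively sensed prediction error, while a visual-feedback term cancels the residual error at output without updating either state; all parameter values are assumptions for illustration.

```python
import numpy as np

# Dual-rate adaptation (fast + slow states driven by sensory prediction
# error) plus a visual-feedback correction that cancels residual error at
# output but contributes nothing to learning. Parameters are illustrative.
A_f, B_f = 0.60, 0.20      # fast process: forgets fast, learns fast
A_s, B_s = 0.99, 0.04      # slow process: forgets slowly, learns slowly

def simulate(n_trials, perturbation, visual_feedback):
    xf = xs = 0.0
    out = np.zeros(n_trials)
    for n in range(n_trials):
        adapted = xf + xs
        residual = perturbation - adapted          # error before correction
        correction = residual if visual_feedback[n] else 0.0
        out[n] = adapted + correction              # expressed behavior
        # learning is driven only by the proprioceptive prediction error
        xf = A_f * xf + B_f * residual
        xs = A_s * xs + B_s * residual
    return out

n = 100
vision_on = np.ones(n, dtype=bool)
vision_on[60:] = False                             # remove visual feedback
perf = simulate(n, perturbation=1.0, visual_feedback=vision_on)
print(f"trial 59 (vision): {perf[59]:.2f}  trial 60 (no vision): {perf[60]:.2f}")
```

Removing the visual term mid-simulation makes performance drop immediately to the level the adaptive states alone support, reproducing the reappearing errors described above.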
Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi
2015-05-01
The human motor learning system, for all its sophistication, paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes performing even a simple reaching task difficult, but it is overcome by alterations in the error correction rule during the trials. To isolate the factors that trigger learners to change the error correction rule, we manipulated the gain of visual angular errors when participants made arm-reaching movements with mirror-reversed visual feedback, and compared the timing of the rule alteration between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error correction rule. Under both gain conditions, visual angular errors increased under the MR transformation, and suddenly decreased after 3-5 trials of increase. The increase leveled off at different amplitudes in the two groups, nearly proportional to the visual gain. The findings suggest that the alteration of the error-correction rule does not depend on the amplitude of visual angular errors, and is possibly determined by the number of trials over which the errors increased or by a statistical property of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
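The rule-alteration idea can be made concrete with a toy reaching loop: under mirror reversal, the seen error has the opposite sign of the motor error, so the normal correction rule amplifies it, and flipping the sign of the correction after a run of growing errors restores convergence. The trial-count trigger below is only one of the candidate mechanisms the abstract raises, used here as an assumption.

```python
# Under mirror reversal, the seen error has the opposite sign of the motor
# error, so the normal rule "move to cancel the seen error" amplifies it.
# Flip the correction sign after a run of growing errors (trigger rule
# assumed for illustration).
GAIN, TRIGGER = 0.5, 3
aim, target = 0.0, 10.0
sign, growing = +1, 0
prev = None
for trial in range(12):
    seen_error = -(target - aim)         # mirror-reversed visual error
    if prev is not None and abs(seen_error) > abs(prev):
        growing += 1
        if growing >= TRIGGER:
            sign, growing = -sign, 0     # alter the error-correction rule
    else:
        growing = 0
    prev = seen_error
    aim += sign * GAIN * seen_error      # error-correction update
    print(f"trial {trial:2d}: |error| = {abs(seen_error):6.2f}")
```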
Detecting ‘Wrong Blood in Tube’ Errors: Evaluation of a Bayesian Network Approach
Doctor, Jason N.; Strylewicz, Greg
2010-01-01
Objective In an effort to address the problem of laboratory errors, we develop and evaluate a method to detect mismatched specimens from nationally collected blood laboratory data in two experiments. Methods In Experiments 1 and 2, using blood labs from the National Health and Nutrition Examination Survey (NHANES) and values derived from the Diabetes Prevention Program (DPP), respectively, a proportion of glucose and HbA1c specimens were randomly mismatched. A Bayesian network that encoded probabilistic relationships among analytes was used to predict mismatches. In Experiment 1 the performance of the network was compared against existing error detection software. In Experiment 2 the network was compared against 11 human experts recruited from the American Academy of Clinical Chemists. Results were compared via areas under the receiver-operating characteristic curves (AUCs) and with agreement statistics. Results In Experiment 1 the network was most predictive of mismatches that produced clinically significant discrepancies between true and mismatched scores (AUC of 0.87 (±0.04) for HbA1c and 0.83 (±0.02) for glucose), performed well in identifying errors among those self-reporting diabetes (N = 329) (AUC = 0.79 (±0.02)), and performed significantly better than the established approach it was tested against (in all cases p < 0.05). In Experiment 2 it performed better (and in no case worse) than 7 of the 11 human experts. Average percent agreement between experts and the Bayesian network was 0.79, and kappa (κ) was 0.59. Conclusions A Bayesian network can accurately identify mismatched specimens. The algorithm is best at identifying mismatches that result in a clinically significant magnitude of error. PMID:20566275
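The detection principle is that analytes from the same patient are probabilistically related, so a jointly improbable pair is a likely mismatch. In the sketch below a single learned glucose-to-HbA1c relationship stands in for the full Bayesian network, mismatches are simulated by shuffling, and detection is scored by AUC; the distributions and the 10% mismatch rate are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 2000

# Correlated glucose (mg/dL) and HbA1c (%) for matched specimens (assumed)
glucose = rng.normal(110, 30, n).clip(60, 300)
hba1c = 4.0 + 0.025 * glucose + rng.normal(0, 0.4, n)

# Randomly mismatch 10% of HbA1c specimens (swap across patients)
is_mismatch = rng.random(n) < 0.10
hba1c_obs = np.where(is_mismatch, rng.permutation(hba1c), hba1c)

# Score each pair by its improbability under the learned glucose->HbA1c
# relationship (a two-node stand-in for the full Bayesian network)
slope, intercept = np.polyfit(glucose, hba1c_obs, 1)
resid = hba1c_obs - (intercept + slope * glucose)
score = np.abs(resid) / resid.std()

print(f"AUC = {roc_auc_score(is_mismatch, score):.2f}")
```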
Russ, Alissa L; Zillich, Alan J; Melton, Brittany L; Russell, Scott A; Chen, Siying; Spina, Jeffrey R; Weiner, Michael; Johnson, Elizabette G; Daggy, Joanne K; McManus, M Sue; Hawsey, Jason M; Puleo, Anthony G; Doebbeling, Bradley N; Saleem, Jason J
2014-01-01
Objective To apply human factors engineering principles to improve alert interface design. We hypothesized that incorporating human factors principles into alerts would improve usability, reduce workload for prescribers, and reduce prescribing errors. Materials and methods We performed a scenario-based simulation study using a counterbalanced, crossover design with 20 Veterans Affairs prescribers to compare original versus redesigned alerts. We redesigned drug–allergy, drug–drug interaction, and drug–disease alerts based upon human factors principles. We assessed usability (learnability of redesign, efficiency, satisfaction, and usability errors), perceived workload, and prescribing errors. Results Although prescribers received no training on the design changes, prescribers were able to resolve redesigned alerts more efficiently (median (IQR): 56 (47) s) compared to the original alerts (85 (71) s; p=0.015). In addition, prescribers rated redesigned alerts significantly higher than original alerts across several dimensions of satisfaction. Redesigned alerts led to a modest but significant reduction in workload (p=0.042) and significantly reduced the number of prescribing errors per prescriber (median (range): 2 (1–5) compared to original alerts: 4 (1–7); p=0.024). Discussion Aspects of the redesigned alerts that likely contributed to better prescribing include design modifications that reduced usability-related errors, providing clinical data closer to the point of decision, and displaying alert text in a tabular format. Displaying alert text in a tabular format may help prescribers extract information quickly and thereby increase responsiveness to alerts. Conclusions This simulation study provides evidence that applying human factors design principles to medication alerts can improve usability and prescribing outcomes. PMID:24668841
Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition
Kheradpisheh, Saeed Reza; Ghodrati, Masoud; Ganjtabesh, Mohammad; Masquelier, Timothée
2016-01-01
Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields, and a hierarchy of layers which progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model, and compared their results to those of humans with backward masking. Unlike in all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations to demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to have representations that are consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations. PMID:27601096
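The benchmarking protocol can be sketched independently of any particular network: hold the object categories fixed, increase the magnitude of viewpoint variation level by level, train and test a linear readout on each model's features, and compare the accuracy curves. Below, synthetic feature clusters whose spread grows with variation level stand in for real DCNN activations; all sizes and spreads are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_classes, n_per, dim = 8, 60, 128

# Synthetic stand-in for model features: class clusters whose spread grows
# with viewpoint-variation level (real features would come from a DCNN
# layer evaluated on the rendered image set)
centers = rng.normal(0, 1, (n_classes, dim))
for level, spread in enumerate([0.5, 1.0, 2.0, 3.5], start=1):
    X = np.vstack([c + rng.normal(0, spread, (n_per, dim)) for c in centers])
    y = np.repeat(np.arange(n_classes), n_per)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    acc = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)
    print(f"variation level {level}: linear-readout accuracy = {acc:.2f}")
```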
The human factors of quality and QA in R&D environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, S.G.
1990-01-01
Achieving quality is a human activity. It is therefore important to consider the human in the design, development and evaluation of work processes and environments in an effort to enhance human performance and minimize error. It is also important to allow for individual differences when considering human factors issues. Human factors is the field of study which can provide information on integrating the human into the system. Human factors and quality are related for the customer of R&D work, the R&D personnel who perform the work, and the quality professional who oversees the process of quality in the work. 18 refs., 1 fig.
Human Factors Directions for Civil Aviation
NASA Technical Reports Server (NTRS)
Hart, Sandra G.
2002-01-01
Despite considerable progress in understanding human capabilities and limitations, incorporating human factors into aircraft design, operation, and certification, and the emergence of new technologies designed to reduce workload and enhance human performance in the system, most aviation accidents still involve human errors. Such errors occur as a direct or indirect result of untimely, inappropriate, or erroneous actions (or inactions) by apparently well-trained and experienced pilots, controllers, and maintainers. The field of human factors has solved many of the more tractable problems related to simple ergonomics, cockpit layout, symbology, and so on. We have learned much about the relationships between people and machines, but know less about how to form successful partnerships between humans and the information technologies that are beginning to play a central role in aviation. Significant changes envisioned in the structure of the airspace, pilots and controllers' roles and responsibilities, and air/ground technologies will require a similarly significant investment in human factors during the next few decades to ensure the effective integration of pilots, controllers, dispatchers, and maintainers into the new system. Many of the topics that will be addressed are not new because progress in crucial areas, such as eliminating human error, has been slow. A multidisciplinary approach that capitalizes upon human studies and new classes of information, computational models, intelligent analytical tools, and close collaborations with organizations that build, operate, and regulate aviation technology will ensure that the field of human factors meets the challenge.
Brain processing of visual information during fast eye movements maintains motor performance.
Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis
2013-01-01
Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.
Safety in the operating theatre--part 1: interpersonal relationships and team performance
NASA Technical Reports Server (NTRS)
Schaefer, H. G.; Helmreich, R. L.; Scheidegger, D.
1995-01-01
The authors examine the application of interpersonal human factors training to operating room (OR) personnel. Mortality studies of OR deaths and critical incident studies of anesthesia are examined to determine the role of human error in OR incidents. Theoretical models of system vulnerability to accidents are presented, with emphasis on a systems approach to OR performance. Input, process, and outcome factors are discussed in detail.
Expert Performance and Time Pressure: Implications for Automation Failures in Aviation
2016-09-30
Fragmentary report text; recoverable content: implications of time pressure for expert performance in the aviation domain are discussed. Subject terms: decision making, time pressure, error, situational awareness. A further fragment notes that human-automation interaction has been a challenge for human factors for quite some time and its relevance continues to grow (e.g., Bainbridge, 1983; de Winter).
Altitude deviations: Breakdowns of an error-tolerant system
NASA Technical Reports Server (NTRS)
Palmer, Everett A.; Hutchins, Edwin L.; Ritter, Richard D.; Vancleemput, Inge
1993-01-01
Pilot reports of aviation incidents to the Aviation Safety Reporting System (ASRS) provide a window on the problems occurring in today's airline cockpits. The narratives of 10 pilot reports of errors made in the automation-assisted altitude-change task are used to illustrate some of the issues of pilots interacting with automatic systems. These narratives are then used to construct a description of the cockpit as an information processing system. The analysis concentrates on the error-tolerant properties of the system and on how breakdowns can occasionally occur. An error-tolerant system can detect and correct its internal processing errors. The cockpit system consists of two or three pilots supported by autoflight, flight-management, and alerting systems. These humans and machines have distributed access to clearance information and perform redundant processing of information. Errors can be detected as deviations from either expected behavior or as deviations from expected information. Breakdowns in this system can occur when the checking and cross-checking tasks that give the system its error-tolerant properties are not performed because of distractions or other task demands. Recommendations based on the analysis for improving the error tolerance of the cockpit system are given.
Improved Quick Disconnect (QD) Interface Through Fail Safe Parts Identification
NASA Technical Reports Server (NTRS)
Blanch-Payne, Evelyn
2001-01-01
An extensive review of existing Quick Disconnect (QD) mating and demating operations was performed to determine which shuttle part interface identifications and procedures contribute to human factor errors. The research methods used consisted of interviews with engineers and technicians, examination of incident reports, critiques of video and audio tapes of QD operations, and attendance of a Hyper QD operational course. The data strongly suggest that there are inherent human factor errors involved in QD operations. To promote fail-safe operations, QD interface problem areas and recommendations were outlined and reviewed. It is suggested that dialogue, investigations and recommendations continue.
Human Error In Complex Systems
NASA Technical Reports Server (NTRS)
Morris, Nancy M.; Rouse, William B.
1991-01-01
Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, their impact on human error events, and to predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
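HEART's quantification step follows a standard published formula: the nominal human error probability of the generic task type is multiplied, for each applicable error-producing condition (EPC), by (EPC_max - 1) x APOA + 1, where APOA is the analyst's assessed proportion of affect. The sketch below applies that formula; the nominal value, EPCs, and APOAs are illustrative, not the study's data.

```python
# HEART quantification: nominal human error probability for the generic
# task type, scaled by each applicable error-producing condition (EPC).
# Standard formula: HEP = nominal * prod((epc_max - 1) * apoa + 1),
# where APOA is the assessed proportion of affect (0-1).
# Values below are illustrative, not taken from the study.

nominal_hep = 0.003          # e.g., a routine, well-practiced task
epcs = [
    # (description, maximum multiplier, assessed proportion of affect)
    ("time shortage", 11.0, 0.4),
    ("distraction / competing tasks", 6.0, 0.3),
    ("inexperience with procedure", 3.0, 0.2),
]

hep = nominal_hep
for desc, epc_max, apoa in epcs:
    factor = (epc_max - 1.0) * apoa + 1.0
    hep *= factor
    print(f"{desc:<32s} factor = {factor:.2f}")

print(f"predicted HEP = {min(hep, 1.0):.4f}")
```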
NASA Technical Reports Server (NTRS)
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, their impact on human error events, and to predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
Effect of contrast on human speed perception
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Thompson, Peter
1992-01-01
This study is part of an ongoing collaborative research effort between the Life Science and Human Factors Divisions at NASA ARC to measure the accuracy of human motion perception in order to predict potential errors in human perception/performance and to facilitate the design of display systems that minimize the effects of such deficits. The study describes how contrast manipulations can produce significant errors in human speed perception. Specifically, when two simultaneously presented parallel gratings are moving at the same speed within stationary windows, the lower-contrast grating appears to move more slowly. This contrast-induced misperception of relative speed is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g., a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). The misperception is large: a 70 percent contrast grating must, on average, be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, it is largely independent of the absolute contrast level and is a quasilinear function of the log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, the relative orientation of the two gratings is important. Finally, the effect depends on the temporal presentation of the stimuli: the effects of contrast on perceived speed appear lessened when the stimuli to be matched are presented sequentially. These data constrain both physiological models of visual cortex and models of human performance. We conclude that viewing conditions that affect contrast, such as fog, may cause significant errors in speed judgments.
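A rough sketch (Python) of the quasilinear relationship described above, in which the matching-speed reduction grows with the log contrast ratio; the gain is back-computed from the reported 35% slowing for a 70%:10% contrast pair and is an illustrative fit, not the authors' model.

import math

# Fractional speed mismatch modeled as k * log10(contrast ratio); k is
# back-computed from the reported datum (35% slowing at a 7:1 contrast ratio).
K = 0.35 / math.log10(70 / 10)  # ~0.41, an assumed illustrative gain

def matching_speed(reference_speed, c_high, c_low):
    # Speed at which the higher-contrast grating perceptually matches the
    # lower-contrast one moving at reference_speed (deg/s).
    return reference_speed * (1.0 - K * math.log10(c_high / c_low))

print(matching_speed(2.0, 70, 10))  # ~1.3 deg/s, i.e., slowed by ~35%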
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bley, D.C.; Cooper, S.E.; Forester, J.A.
ATHEANA, a second-generation Human Reliability Analysis (HRA) method, integrates advances in psychology with engineering, human factors, and Probabilistic Risk Analysis (PRA) disciplines to provide an HRA quantification process and PRA modeling interface that can accommodate and represent human performance in real nuclear power plant events. The method uses the characteristics of serious accidents, identified through retrospective analysis of serious operational events, to set priorities in a search process for significant human failure events, unsafe acts, and error-forcing context (unfavorable plant conditions combined with negative performance-shaping factors). ATHEANA has been tested in a demonstration project at an operating pressurized water reactor.
IRBs, conflict and liability: will we see IRBs in court? Or is it when?
Icenogle, Daniel L
2003-01-01
The entire human research infrastructure is under intense and increasing financial pressure. These pressures may have been responsible for several errors in judgment by those responsible for managing human research and protecting human subjects. The result of these errors has been some terrible accidents, some of which have cost the lives of human research volunteers. This, in turn, is producing both increased liability risk for those who manage the various aspects of human research and increasing scrutiny as to the capability of the human research protection structure as currently constituted. It is the author's contention that the current structure is fully capable of offering sufficient protection for participants in human research, provided that Institutional Review Board (IRB) staff and members are given sufficient resources and perform their tasks with sufficient responsibility. The status quo alternative is that IRBs and their members will find themselves at great risk of becoming defendants in lawsuits seeking compensation for damages resulting from human experimentation gone awry.
Active learning: learning a motor skill without a coach.
Huang, Vincent S; Shadmehr, Reza; Diedrichsen, Jörn
2008-08-01
When we learn a new skill (e.g., golf) without a coach, we are "active learners": we have to choose the specific components of the task on which to train (e.g., iron, driver, putter, etc.). What guides our selection of the training sequence? How do choices that people make compare with choices made by machine learning algorithms that attempt to optimize performance? We asked subjects to learn the novel dynamics of a robotic tool while moving it in four directions. They were instructed to choose their practice directions to maximize their performance in subsequent tests. We found that their choices were strongly influenced by motor errors: subjects tended to immediately repeat an action if that action had produced a large error. This strategy was correlated with better performance on test trials. However, even when participants performed perfectly on a movement, they did not avoid repeating that movement. The probability of repeating an action did not drop below chance even when no errors were observed. This behavior led to suboptimal performance. It also violated a strong prediction of current machine learning algorithms, which solve the active learning problem by choosing a training sequence that will maximally reduce the learner's uncertainty about the task. While we show that these algorithms do not provide an adequate description of human behavior, our results suggest ways to improve human motor learning by helping people choose an optimal training sequence.
Classification and reduction of pilot error
NASA Technical Reports Server (NTRS)
Rogers, W. H.; Logan, A. L.; Boley, G. D.
1989-01-01
Human error is a primary or contributing factor in about two-thirds of commercial aviation accidents worldwide. With the ultimate goal of reducing pilot-error accidents, this contract effort is aimed at understanding the factors underlying error events and reducing the probability of certain types of errors by modifying underlying factors such as flight deck design and procedures. A review of the literature relevant to error classification was conducted. Classification includes categorizing types of errors, the information processing mechanisms and factors underlying them, and identifying factor-mechanism-error relationships. The classification scheme developed by Jens Rasmussen was adopted because it provided a comprehensive yet basic error classification shell or structure that could easily accommodate addition of details on domain-specific factors. For these purposes, factors specific to the aviation environment were incorporated. Hypotheses concerning the relationships among a small number of underlying factors, information processing mechanisms, and the error types identified in the classification scheme were formulated. ASRS data were reviewed and a simulation experiment was performed to evaluate and quantify the hypotheses.
Human operator response to error-likely situations in complex engineering systems
NASA Technical Reports Server (NTRS)
Morris, Nancy M.; Rouse, William B.
1988-01-01
The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.
Human error and human factors engineering in health care.
Welch, D L
1997-01-01
Human error is inevitable. It happens in health care systems as it does in all other complex systems, and no measure of attention, training, dedication, or punishment is going to stop it. The discipline of human factors engineering (HFE) has been dealing with the causes and effects of human error since the 1940's. Originally applied to the design of increasingly complex military aircraft cockpits, HFE has since been effectively applied to the problem of human error in such diverse systems as nuclear power plants, NASA spacecraft, the process control industry, and computer software. Today the health care industry is becoming aware of the costs of human error and is turning to HFE for answers. Just as early experimental psychologists went beyond the label of "pilot error" to explain how the design of cockpits led to air crashes, today's HFE specialists are assisting the health care industry in identifying the causes of significant human errors in medicine and developing ways to eliminate or ameliorate them. This series of articles will explore the nature of human error and how HFE can be applied to reduce the likelihood of errors and mitigate their effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cournoyer, Michael E.; Trujillo, Stanley; Lawton, Cindy M.
2016-03-23
A traditional view of incidents is that they are caused by shortcomings in human competence, attention, or attitude. It may be under the label of “loss of situational awareness,” procedure “violation,” or “poor” management. A different view is that human error is not the cause of failure, but a symptom of failure: trouble deeper inside the system. In this perspective, human error is not the conclusion, but rather the starting point of investigations. During an investigation, three types of information are gathered: physical, documentary, and human (recall/experience). Through the causal analysis process, the apparent cause or causes are identified as the most probable causes of an incident or condition that management has the control to fix and for which effective recommendations for corrective actions can be generated. A causal analysis identifies relevant human performance factors. In the following presentation, the anatomy of a radiological incident is discussed and one case study is presented. We analyzed the contributing factors that caused a radiological incident, identifying the underlying conditions, decisions, actions, and inactions that contributed to it, including weaknesses that may warrant error-tolerant improvements. Measures that reduce the consequences or likelihood of recurrence are discussed.
Understanding Risk Tolerance and Building an Effective Safety Culture
NASA Technical Reports Server (NTRS)
Loyd, David
2018-01-01
Estimates of the share of catastrophic mishaps attributable to human error range from 65 to 90 percent; for NASA, human factors-related mishap causes are estimated at approximately 75 percent. As much as we would like to error-proof our work environment, even the most automated and complex technical endeavors require human interaction... and are vulnerable to human frailty. Industry and government are focusing not only on integrating human factors into hazardous work environments, but also on finding practical approaches to cultivating a strong Safety Culture that diminishes risk. Industry and government organizations have recognized the value of monitoring leading indicators to identify potential risk vulnerabilities. NASA has adapted this approach to assess risk controls associated with hazardous, critical, and complex facilities. NASA's facility risk assessments integrate commercial loss control, OSHA (Occupational Safety and Health Administration) Process Safety, API (American Petroleum Institute) Performance Indicator Standard, and NASA Operational Readiness Inspection concepts to identify risk control vulnerabilities.
ERIC Educational Resources Information Center
Yordanova, Juliana; Albrecht, Bjorn; Uebel, Henrik; Kirov, Roumen; Banaschewski, Tobias; Rothenberger, Aribert; Kolev, Vasil
2011-01-01
The maintenance of stable goal-directed behaviour is a hallmark of conscious executive control in humans. Notably, both correct and error human actions may have a subconscious activation-based determination. One possible source of subconscious interference may be the default mode network that, in contrast to attentional network, manifests…
Inspection error and its adverse effects - A model with implications for practitioners
NASA Technical Reports Server (NTRS)
Collins, R. D., Jr.; Case, K. E.; Bennett, G. K.
1978-01-01
Inspection error has clearly been shown to have adverse effects upon the results desired from a quality assurance sampling plan. These effects upon performance measures have been well documented from a statistical point of view. However, little work has been presented to convince the QC manager of the unfavorable cost consequences resulting from inspection error. This paper develops a very general, yet easily used, mathematical cost model. The basic format of the well-known Guthrie-Johns model is used. However, it is modified as required to assess the effects of attributes sampling errors of the first and second kind. The economic results, under different yet realistic conditions, will no doubt be of interest to QC practitioners who face similar problems daily. Sampling inspection plans are optimized to minimize economic losses due to inspection error. Unfortunately, any error at all results in some economic loss which cannot be compensated for by sampling plan design; however, improvements over plans which neglect the presence of inspection error are possible. Implications for human performance betterment programs are apparent, as are trade-offs between sampling plan modification and inspection and training improvements economics.
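To make the mechanism concrete, the hedged sketch below (Python) shows how attributes inspection errors of the first kind (e1, rejecting a good item) and second kind (e2, accepting a defective one) distort the apparent defect rate and hence the lot acceptance probability; the single-sampling plan and rates are invented for illustration and are not the Guthrie-Johns model itself.

from math import comb

def apparent_defect_rate(p, e1, e2):
    # An item is *classified* defective if it is defective and not missed,
    # or good but falsely rejected.
    return p * (1 - e2) + (1 - p) * e1

def acceptance_probability(p, n, c, e1=0.0, e2=0.0):
    # P(accept lot) for a single sampling plan (sample n, accept when the
    # number of classified defectives is <= c), given inspector error rates.
    pe = apparent_defect_rate(p, e1, e2)
    return sum(comb(n, k) * pe**k * (1 - pe)**(n - k) for k in range(c + 1))

# Perfect vs. error-prone inspection of a 2% defective lot, n=50, c=1.
print(acceptance_probability(0.02, 50, 1))                   # ~0.74
print(acceptance_probability(0.02, 50, 1, e1=0.02, e2=0.1))  # ~0.43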
Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.
2014-04-05
In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. These data provide a valuable opportunity to improve human reliability analysis (HRA), but the improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
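A minimal sketch (Python), assuming a conjugate Beta-Binomial model, of the kind of update the article describes: an existing method's HEP serves as the prior mean and is refined with simulator error counts. The prior strength and the counts are hypothetical, not the Halden data.

# Conjugate Beta-Binomial update of a human error probability (HEP).
def update_hep(prior_hep, strength, errors, trials):
    # 'strength' is the number of pseudo-observations the prior is worth
    # (an analyst's assumption about prior confidence).
    alpha = prior_hep * strength + errors
    beta = (1.0 - prior_hep) * strength + (trials - errors)
    return alpha / (alpha + beta)  # posterior mean

# Hypothetical case: SPAR-H assigns HEP = 0.01; crews err 3 times in 50 trials.
print(update_hep(0.01, strength=20, errors=3, trials=50))  # ~0.046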
Zupanc, Christine M; Burgess-Limerick, Robin J; Wallis, Guy
2007-08-01
To investigate error and reaction time consequences of alternating compatible and incompatible steering arrangements during a simulated obstacle avoidance task. Underground coal mine shuttle cars provide an example of a vehicle in which operators are required to alternate between compatible and incompatible steering configurations. This experiment examines the performance of 48 novice participants in a virtual analogue of an underground coal mine shuttle car. Participants were randomly assigned to a compatible condition, an incompatible condition, an alternating condition in which compatibility alternated within and between hands, or an alternating condition in which compatibility alternated between hands. Participants made fewer steering direction errors and made correct steering responses more quickly in the compatible condition. Error rate decreased over time in the incompatible condition. A compatibility effect for both errors and reaction time was also found when the control-response relationship alternated; however, performance improvements over time were not consistent. Isolating compatibility to a hand resulted in reduced error rate and faster reaction time than when compatibility alternated within and between hands. The consequences of alternating control-response relationships are higher error rates and slower responses, at least in the early stages of learning. This research highlights the importance of ensuring consistently compatible human-machine directional control-response relationships.
Distribution of standing-wave errors in real-ear sound-level measurements.
Richmond, Susan A; Kopun, Judy G; Neely, Stephen T; Tan, Hongyang; Gorga, Michael P
2011-05-01
Standing waves can cause measurement errors when sound-pressure level (SPL) measurements are performed in a closed ear canal, e.g., during probe-microphone system calibration for distortion-product otoacoustic emission (DPOAE) testing. Alternative calibration methods, such as forward-pressure level (FPL), minimize the influence of standing waves by calculating the forward-going sound waves separate from the reflections that cause errors. Previous research compared test performance (Burke et al., 2010) and threshold prediction (Rogers et al., 2010) using SPL and multiple FPL calibration conditions, and surprisingly found no significant improvements when using FPL relative to SPL, except at 8 kHz. The present study examined the calibration data collected by Burke et al. and Rogers et al. from 155 human subjects in order to describe the frequency location and magnitude of standing-wave pressure minima to see if these errors might explain trends in test performance. Results indicate that while individual results varied widely, pressure variability was larger around 4 kHz and smaller at 8 kHz, consistent with the dimensions of the adult ear canal. The present data suggest that standing-wave errors are not responsible for the historically poor (8 kHz) or good (4 kHz) performance of DPOAE measures at specific test frequencies.
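As a back-of-the-envelope check on why variability peaks near 4 kHz, the first standing-wave pressure minimum at a probe microphone sits roughly a quarter wavelength from the eardrum; the sketch below (Python) assumes a typical 2.2 cm probe-to-eardrum distance, a value chosen for illustration.

# Quarter-wave estimate of the first standing-wave pressure minimum for a
# probe-to-eardrum distance L (meters).
SPEED_OF_SOUND = 343.0  # m/s, approximate

def first_pressure_minimum_hz(canal_length_m):
    return SPEED_OF_SOUND / (4.0 * canal_length_m)

print(first_pressure_minimum_hz(0.022))  # ~3.9 kHz, near the 4 kHz trouble spot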
NASA Technical Reports Server (NTRS)
Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)
1992-01-01
Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.
Economics of human performance and systems total ownership cost.
Onkham, Wilawan; Karwowski, Waldemar; Ahram, Tareq Z
2012-01-01
The financial costs of investing in people, which are associated with training, acquisition, recruiting, and resolving human errors, have a significant impact on increased total ownership costs. These costs can also inflate budgets and delay schedules. The study of human performance economic assessment in the system acquisition process enhances the visibility of hidden cost drivers, which supports informed program management decisions. This paper presents a literature review of human total ownership cost (HTOC) and cost impacts on overall system performance. Economic value assessment models such as cost-benefit analysis, risk-cost tradeoff analysis, expected value of utility function analysis (EV), the growth readiness matrix, the multi-attribute utility technique, and multi-regression models are introduced to reflect the HTOC and human performance-technology tradeoffs in dollar terms. A human total ownership regression model is introduced to address measurement of the contributing human performance cost components. Results from this study will increase understanding of relevant cost drivers in the system acquisition process over the long term.
An Evidential Reasoning-Based CREAM to Human Reliability Analysis in Maritime Accident Process.
Wu, Bing; Yan, Xinping; Wang, Yang; Soares, C Guedes
2017-10-01
This article proposes a modified cognitive reliability and error analysis method (CREAM) for estimating the human error probability in the maritime accident process on the basis of an evidential reasoning approach. This modified CREAM is developed to precisely quantify the linguistic variables of the common performance conditions and to overcome the problem of ignoring the uncertainty caused by incomplete information in the existing CREAM models. Moreover, this article views maritime accident development from the sequential perspective, where a scenario- and barrier-based framework is proposed to describe the maritime accident process. This evidential reasoning-based CREAM approach together with the proposed accident development framework are applied to human reliability analysis of a ship capsizing accident. It will facilitate subjective human reliability analysis in different engineering systems where uncertainty exists in practice. © 2017 Society for Risk Analysis.
Human error in airway facilities.
DOT National Transportation Integrated Search
2001-01-01
This report examines human errors in Airway Facilities (AF) with the intent of preventing these errors from being passed on to the new Operations Control Centers. To effectively manage errors, they first have to be identified. Human factors engin...
Koetsier, Antonie; Peek, Niels; de Keizer, Nicolette
2012-01-01
Errors may occur in the registration of in-hospital mortality, making it less reliable as a quality indicator. We assessed the types of errors made in in-hospital mortality registration in the clinical quality registry National Intensive Care Evaluation (NICE) by comparing its mortality data to data from a national insurance claims database. Subsequently, we performed site visits at eleven Intensive Care Units (ICUs) to investigate the number, types and causes of errors made in in-hospital mortality registration. A total of 255 errors were found in the NICE registry. Two different types of software malfunction accounted for almost 80% of the errors. The remaining 20% were five types of manual transcription errors and human failures to record outcome data. Clinical registries should be aware of the possible existence of errors in recorded outcome data and understand their causes. In order to prevent errors, we recommend to thoroughly verify the software that is used in the registration process.
Chiu, Ming-Chuan; Hsieh, Min-Chih
2016-05-01
The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
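For orientation, the sketch below (Python/NumPy) shows the core TOPSIS ranking step; the paper uses a fuzzy variant, and the crisp version here, with an invented decision matrix of error factors scored against four criteria, is only an illustration.

import numpy as np

def topsis(matrix, weights, benefit):
    # matrix: alternatives x criteria; benefit[j] is True if criterion j
    # is larger-is-better.
    m = matrix / np.linalg.norm(matrix, axis=0)  # vector-normalize columns
    v = m * weights                              # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)               # closeness: higher ranks better

scores = np.array([[7.0, 8.0, 6.0, 5.0],   # hypothetical error factor A
                   [9.0, 6.0, 7.0, 8.0],   # factor B
                   [5.0, 9.0, 8.0, 6.0]])  # factor C
weights = np.array([0.3, 0.3, 0.2, 0.2])
print(topsis(scores, weights, np.array([True, True, True, True])))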
Suba, Eric J; Pfeifer, John D; Raab, Stephen S
2007-10-01
Patient identification errors in surgical pathology often involve switches of prostate or breast needle core biopsy specimens among patients. We assessed strategies for decreasing the occurrence of these uncommon and yet potentially catastrophic events. Root cause analyses were performed following 3 cases of patient identification error involving prostate needle core biopsy specimens. Patient identification errors in surgical pathology result from slips and lapses of automatic human action that may occur at numerous steps during pre-laboratory, laboratory and post-laboratory work flow processes. Patient identification errors among prostate needle biopsies may be difficult to entirely prevent through the optimization of work flow processes. A DNA time-out, whereby DNA polymorphic microsatellite analysis is used to confirm patient identification before radiation therapy or radical surgery, may eliminate patient identification errors among needle biopsies.
Detecting wrong notes in advance: neuronal correlates of error monitoring in pianists.
Ruiz, María Herrojo; Jabusch, Hans-Christian; Altenmüller, Eckart
2009-11-01
Music performance is an extremely rapid process with low incidence of errors even at the fast rates of production required. This is possible only due to the fast functioning of the self-monitoring system. Surprisingly, no specific data about error monitoring have been published in the music domain. Consequently, the present study investigated the electrophysiological correlates of executive control mechanisms, in particular error detection, during piano performance. Our target was to extend the previous research efforts on understanding of the human action-monitoring system by selecting a highly skilled multimodal task. Pianists had to retrieve memorized music pieces at a fast tempo in the presence or absence of auditory feedback. Our main interest was to study the interplay between auditory and sensorimotor information in the processes triggered by an erroneous action, considering only wrong pitches as errors. We found that around 70 ms prior to errors a negative component is elicited in the event-related potentials and is generated by the anterior cingulate cortex. Interestingly, this component was independent of the auditory feedback. However, the auditory information did modulate the processing of the errors after their execution, as reflected in a larger error positivity (Pe). Our data are interpreted within the context of feedforward models and the auditory-motor coupling.
The Development of a Modelling Solution to Address Manpower and Personnel Issues Using the IPME
2010-11-01
training for a military system. It deals with the number of personnel spaces and available people. One of the main concerns in this domain is to... are often addressed by examining existing solutions for similar systems and/or a trial-and-error method based on human-in-the-loop tests. Such an... significant effort and resources on the development of a human performance modelling software, the Integrated Performance Modelling Environment (IPME
Information systems and human error in the lab.
Bissell, Michael G
2004-01-01
Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough: to the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events, or becoming deskilled in problem-solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.
NASA Astrophysics Data System (ADS)
Rahim, Ahmad Nabil Bin Ab; Mohamed, Faizal; Farid, Mohd Fairus Abdul; Fazli Zakaria, Mohd; Sangau Ligam, Alfred; Ramli, Nurhayati Binti
2018-01-01
Human performance can be affected by prevalent stress, measured here using the Depression, Anxiety and Stress Scale (DASS). The respondents' feedback indicates that the main factor causing the highest prevalence of stress is working conditions that require operators to handle critical situations and make prompt critical decisions. Analysis of the relationship between prevalence of stress and performance shaping factors (PSFs) found that PSF-Fitness and PSF-Work Process showed positive Pearson correlations, with scores of .763 and .826 and significance levels of p = .028 and p = .012, respectively. These are good, significant positive correlations between prevalence of stress and the human performance shaping factors related to fitness, work processes, and procedures. The higher the stress level of the respondents, the higher the scores for the selected PSFs. This is because higher levels of stress lead to deteriorating physical health and worsening cognition. In addition, a lack of understanding of work procedures can also be a factor that causes growing stress. The higher these values, the higher the probability that human error will occur. Thus, monitoring the level of stress among RTP operators is important to ensure the safety of the RTP.
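For reference, correlations of this kind are reproducible from raw paired scores in a few lines (Python/SciPy); the arrays below are invented stand-ins for DASS stress scores and a PSF rating, not the study's data.

from scipy import stats

# Hypothetical paired observations: DASS stress score vs. a PSF rating.
stress = [12, 18, 9, 22, 15, 20, 11, 17]
psf_fitness = [2, 3, 1, 4, 2, 4, 2, 3]

r, p = stats.pearsonr(stress, psf_fitness)
print(f"r = {r:.3f}, p = {p:.3f}")  # a positive r mirrors the reported trend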
Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza
2018-03-26
Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.
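A minimal sketch (Python) of the Monte Carlo treatment the authors recommend: an assumed systematic (epistemic) recall bias is sampled alongside ordinary measurement noise and propagated into the exposure estimate. Both distributions are illustrative assumptions.

import random

def simulate_exposure(measured_minutes, n=100_000):
    # Propagate an epistemic recall bias (lognormal) and aleatory noise
    # (Gaussian) through a self-reported exposure value.
    samples = []
    for _ in range(n):
        recall_bias = random.lognormvariate(0.0, 0.3)  # epistemic component
        noise = random.gauss(0.0, 5.0)                 # aleatory component
        samples.append(measured_minutes * recall_bias + noise)
    samples.sort()
    return samples[n // 2], (samples[int(0.025 * n)], samples[int(0.975 * n)])

median, interval = simulate_exposure(60.0)
print(median, interval)  # central estimate and a 95% uncertainty interval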
10 CFR 50.73 - Licensee event report system.
Code of Federal Regulations, 2010 CFR
2010-01-01
... plant design; or (2) Normal and expected wear or degradation. (x) Any event that posed an actual threat... discovery of each component or system failure or procedural error. (J) For each human performance related...
Engineering Data Compendium. Human Perception and Performance. Volume 1
1988-01-01
[Figure and source credits garbled by OCR; recoverable fragments include: Craske, B., & Crawshaw, M. (1974), Differential errors of kinesthesis produced by previous limb position; Clark, F. J., & Horch, K. W.; Experimental Psychology, 90, 287-299. © 1986 by John Wiley & Sons, Inc. Reprinted with permission.]
Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.
Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C
2017-02-15
Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support for a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that are modulated by the brain chemical dopamine are sensitive to reward variability. Here, we aimed to directly relate dopamine to learning about variable rewards, and the neural encoding of associated teaching signals. We perturbed dopamine in healthy individuals using dopaminergic medication and asked them to predict variable rewards while we made brain scans. Dopamine perturbations impaired learning and the neural encoding of reward variability, thus establishing a direct link between dopamine and adaptation to reward variability. These results aid our understanding of clinical conditions associated with dopaminergic dysfunction, such as psychosis. Copyright © 2017 Diederen et al.
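The computational idea being tested can be sketched with a delta-rule learner whose prediction error is normalized by a running estimate of reward variability, so learning effectively slows when outcomes are noisy; this generic illustration (Python) is not the authors' fitted model.

import random

def adaptive_learner(rewards, lr=0.3, sd0=1.0):
    # Value update uses an adaptively coded (variance-scaled) prediction error.
    value, sd = 0.0, sd0
    for r in rewards:
        scaled_error = (r - value) / sd        # prediction error in SD units
        value += lr * scaled_error
        sd += 0.1 * (abs(r - value) - sd)      # slowly track reward variability
    return value

low_var = [random.gauss(10.0, 1.0) for _ in range(200)]
high_var = [random.gauss(10.0, 5.0) for _ in range(200)]
print(adaptive_learner(low_var), adaptive_learner(high_var))  # both near 10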
Stochastic Models of Human Errors
NASA Technical Reports Server (NTRS)
Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)
2002-01-01
Humans play an important role in the overall reliability of engineering systems, and accidents and system failures are often traced to human errors. Therefore, in order to have meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process with mathematical models is a key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by a sound theoretical foundation, to address the occurrence of human errors in the processing of the Space Shuttle.
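One common starting point for such stochastic models, offered here purely as an illustrative assumption rather than the report's actual formulation, is to treat error occurrences in a processing flow as a Poisson process and ask for the probability of an error-free run (Python).

import math

def p_error_free(lam, n):
    # P(no errors) over n tasks when errors arrive at rate lam per task.
    return math.exp(-lam * n)

def p_at_most_k(lam, n, k):
    mean = lam * n
    return sum(math.exp(-mean) * mean**i / math.factorial(i) for i in range(k + 1))

print(p_error_free(0.002, 100))    # ~0.82 for an assumed rate of 2e-3 per task
print(p_at_most_k(0.002, 100, 1))  # ~0.98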
Operational Interventions to Maintenance Error
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki
1997-01-01
A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.
Reduction of Maintenance Error Through Focused Interventions
NASA Technical Reports Server (NTRS)
Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)
1997-01-01
It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.
The Automated Assessment of Postural Stability: Balance Detection Algorithm.
Napoli, Alessandro; Glass, Stephen M; Tucker, Carole; Obeid, Iyad
2017-12-01
Impaired balance is a common indicator of mild traumatic brain injury, concussion and musculoskeletal injury. Given the clinical relevance of such injuries, especially in military settings, it is paramount to develop more accurate and reliable on-field evaluation tools. This work presents the design and implementation of the automated assessment of postural stability (AAPS) system, for on-field evaluations following concussion. The AAPS is a computer system, based on inexpensive off-the-shelf components and custom software, that aims to automatically and reliably evaluate balance deficits, by replicating a known on-field clinical test, namely, the Balance Error Scoring System (BESS). The AAPS main innovation is its balance error detection algorithm that has been designed to acquire data from a Microsoft Kinect ® sensor and convert them into clinically-relevant BESS scores, using the same detection criteria defined by the original BESS test. In order to assess the AAPS balance evaluation capability, a total of 15 healthy subjects (7 male, 8 female) were required to perform the BESS test, while simultaneously being tracked by a Kinect 2.0 sensor and a professional-grade motion capture system (Qualisys AB, Gothenburg, Sweden). High definition videos with BESS trials were scored off-line by three experienced observers for reference scores. AAPS performance was assessed by comparing the AAPS automated scores to those derived by three experienced observers. Our results show that the AAPS error detection algorithm presented here can accurately and precisely detect balance deficits with performance levels that are comparable to those of experienced medical personnel. Specifically, agreement levels between the AAPS algorithm and the human average BESS scores ranging between 87.9% (single-leg on foam) and 99.8% (double-leg on firm ground) were detected. Moreover, statistically significant differences in balance scores were not detected by an ANOVA test with alpha equal to 0.05. Despite some level of disagreement between human and AAPS-generated scores, the use of an automated system yields important advantages over currently available human-based alternatives. These results underscore the value of using the AAPS, that can be quickly deployed in the field and/or in outdoor settings with minimal set-up time. Finally, the AAPS can record multiple error types and their time course with extremely high temporal resolution. These features are not achievable by humans, who cannot keep track of multiple balance errors with such a high resolution. Together, these results suggest that computerized BESS calculation may provide more accurate and consistent measures of balance than those derived from human experts.
Artificial intelligence in medicine: humans need not apply?
Diprose, William; Buist, Nicholas
2016-05-06
Artificial intelligence (AI) is a rapidly growing field with a wide range of applications. Driven by economic constraints and the potential to reduce human error, we believe that over the coming years AI will perform a significant amount of the diagnostic and treatment decision-making traditionally performed by the doctor. Humans would continue to be an important part of healthcare delivery, but in many situations, less expensive fit-for-purpose healthcare workers could be trained to 'fill the gaps' where AI are less capable. As a result, the role of the doctor as an expensive problem-solver would become redundant.
Study of style effects on OCR errors in the MEDLINE database
NASA Astrophysics Data System (ADS)
Garrison, Penny; Davis, Diane L.; Andersen, Tim L.; Barney Smith, Elisa H.
2005-01-01
The National Library of Medicine has developed a system for the automatic extraction of data from scanned journal articles to populate the MEDLINE database. Although the 5-engine OCR system used in this process exhibits good performance overall, it does make errors in character recognition that must be corrected in order for the process to achieve the requisite accuracy. The correction process works by feeding words that have characters with less than 100% confidence (as determined automatically by the OCR engine) to a human operator who then must manually verify the word or correct the error. The majority of these errors are contained in the affiliation information zone, where the characters are in italics or small fonts; therefore, only affiliation information data is used in this research. This paper examines the correlation between OCR errors and various character attributes in the MEDLINE database, such as font size, italics, bold, etc., and OCR confidence levels. The motivation for this research is that, if a correlation between character style and types of errors exists, it should be possible to use this information to improve operator productivity by increasing the probability that the correct word option is presented to the human editor. We have determined that this correlation exists, in particular for the case of characters with diacritics.
CBP for Field Workers – Results and Insights from Three Usability and Interface Design Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna Helene; Le Blanc, Katya Lee; Bly, Aaron Douglas
2015-09-01
Nearly all activities that involve human interaction with the systems in a nuclear power plant are guided by procedures. Even though the paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety, improving procedure use could yield significant savings in increased efficiency as well as improved nuclear safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease the human error rate, especially the human errors associated with procedure use. As a step toward the goal of improving procedure use and adherence, researchers in the Light-Water Reactor Sustainability (LWRS) Program, together with the nuclear industry, have been investigating the possibility and feasibility of replacing the current paper-based procedure process with a computer-based procedure (CBP) system. This report describes a field evaluation of new design concepts of a prototype computer-based procedure system.
Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug
2011-05-01
Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this vacancy, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as the targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in the CT space, EM space and endoscopy image space. To obtain 3D error estimation, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where the ground truth information is present and systematic performance (including the calibration error) can be assessed. We obtain the mean in-plane error to be on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error. We present in this paper our calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers us good insights to understand the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also can provide quality control of laboratory experiments for endoscopic procedures before the experiments are transferred from the laboratory to human subjects.
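The paper's 3D accuracy metric reduces to a Euclidean distance between corresponding targets after registration; a minimal sketch (Python/NumPy) with made-up fiducial coordinates follows.

import numpy as np

def target_registration_error(targets_ct, targets_video):
    # Mean 3D Euclidean distance (mm) between fiducials identified in CT
    # space and the same fiducials reconstructed from endoscopic video.
    diffs = np.asarray(targets_ct) - np.asarray(targets_video)
    return np.linalg.norm(diffs, axis=1).mean()

# Hypothetical positions (mm) of three fiducials in each space.
ct = [[10.0, 22.1, 5.3], [40.2, 18.7, 9.1], [25.5, 30.0, 12.4]]
video = [[10.8, 21.5, 5.9], [39.6, 19.4, 8.6], [26.1, 29.2, 13.0]]
print(target_registration_error(ct, video))  # accumulated system error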
NASA Technical Reports Server (NTRS)
Diorio, Kimberly A.; Voska, Ned (Technical Monitor)
2002-01-01
This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: describe the mission; define the system; identify human-machine interfaces; list human actions; identify potential errors; identify factors that affect error; determine the likelihood of error; determine the potential effects of errors; evaluate risk; and generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1981-01-01
Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.
Likitlersuang, Jirapat; Leineweber, Matthew J; Andrysek, Jan
2017-10-01
Thin film force sensors are commonly used within biomechanical systems, and at the interface of the human body and medical and non-medical devices. However, limited information is available about their performance in such applications. The aims of this study were to evaluate and determine ways to improve the performance of thin film (FlexiForce) sensors at the body/device interface. Using a custom apparatus designed to load the sensors under simulated body/device conditions, two aspects were explored relating to sensor calibration and application. The findings revealed accuracy errors of 23.3±17.6% for force measurements at the body/device interface with conventional techniques of sensor calibration and application. Applying a thin rigid disc between the sensor and human body and calibrating the sensor using compliant surfaces was found to substantially reduce measurement errors to 2.9±2.0%. The use of alternative calibration and application procedures is recommended to gain acceptable measurement performance from thin film force sensors in body/device applications. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
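The recommended practice of calibrating against the loading conditions actually used can be sketched as a curve fit that is then inverted at measurement time (Python/NumPy); the polynomial form and the calibration pairs below are assumptions for illustration, not FlexiForce specifications.

import numpy as np

# Hypothetical calibration pairs collected with the sensor loaded through a
# compliant surface: raw sensor output vs. applied force (newtons).
raw = np.array([0.10, 0.22, 0.35, 0.48, 0.60, 0.73])
force_n = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])

coeffs = np.polyfit(raw, force_n, deg=2)  # low-order fit of the curve

def raw_to_force(reading):
    return np.polyval(coeffs, reading)

print(raw_to_force(0.41))  # interpolated force estimate, ~17 N here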
New Integrated Modeling Capabilities: MIDAS' Recent Behavioral Enhancements
NASA Technical Reports Server (NTRS)
Gore, Brian F.; Jarvis, Peter A.
2005-01-01
The Man-machine Integration Design and Analysis System (MIDAS) is an integrated human performance modeling software tool that is based on mechanisms that underlie and cause human behavior. A PC-Windows version of MIDAS has been created that integrates the anthropometric character "Jack (TM)" with MIDAS' validated perceptual and attention mechanisms. MIDAS now models multiple simulated humans engaging in goal-related behaviors. New capabilities include the ability to predict situations in which errors and/or performance decrements are likely due to a variety of factors including concurrent workload and performance influencing factors (PIFs). This paper describes a new model that predicts the effects of microgravity on a mission specialist's performance, and its first application to simulating the task of conducting a Life Sciences experiment in space according to a sequential or parallel schedule of performance.
Souvestre, P A; Landrock, C K; Blaber, A P
2008-08-01
Human factors centered aviation accident analyses report that skill-based errors are the cause of 80% of all accidents, decision-making-related errors of 30%, and perceptual errors of 6%. In-flight decision-making error has long been recognized as a major avenue leading to incidents and accidents. Over the past three decades, tremendous and costly efforts have been made to clarify causation, roles, and responsibility, as well as to elaborate various preventative and curative countermeasures blending state-of-the-art biomedical and technological advances with psychophysiological training strategies. In-flight statistics have not changed significantly, and a significant number of issues remain unresolved. The Fine Postural System and its corollary, the Postural Deficiency Syndrome (PDS), both defined in the 1980s, are respectively neurophysiological and medical diagnostic models that reflect the status of central neural sensory-motor and cognitive controls. They have been used successfully in complex neurotraumatology and related rehabilitation for over two decades. Analysis of clinical data taken over a ten-year period from acute and chronic post-traumatic PDS patients shows a strong correlation between symptoms commonly exhibited before, alongside, or even after an error, and sensory-motor or PDS-related symptoms. Examples are given of how PDS-related central sensory-motor control dysfunction can be correctly identified and monitored via a neurophysiological ocular-vestibular-postural monitoring system. The data presented provide strong evidence that a specific biomedical assessment methodology can lead to a better understanding of the in-flight adaptive neurophysiological, cognitive, and perceptual dysfunctional status that could induce in-flight errors. How relevant human factors can be identified and leveraged to maintain optimal performance is also addressed.
NASA Technical Reports Server (NTRS)
Mcruer, D. T.; Clement, W. F.; Allen, R. W.
1980-01-01
Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct the underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing the propagation of human error.
Simulating Human Cognition in the Domain of Air Traffic Control
NASA Technical Reports Server (NTRS)
Freed, Michael; Johnston, James C.; Null, Cynthia H. (Technical Monitor)
1995-01-01
Experiments intended to assess performance in human-machine interactions are often prohibitively expensive, unethical or otherwise impractical to run. Approximations of experimental results can be obtained, in principle, by simulating the behavior of subjects using computer models of human mental behavior. Computer simulation technology has been developed for this purpose. Our goal is to produce a cognitive model suitable to guide the simulation machinery and enable it to closely approximate a human subject's performance in experimental conditions. The described model is designed to simulate a variety of cognitive behaviors involved in routine air traffic control. As the model is elaborated, our ability to predict the effects of novel circumstances on controller error rates and other performance characteristics should increase. This will enable the system to project the impact of proposed changes to air traffic control procedures and equipment on controller performance.
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.
1994-01-01
Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR, and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
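The error metric described above reduces to a per-point Euclidean distance between known and digitized grid coordinates. A minimal sketch of that computation follows; the coordinate values and the 10 cm field of view are hypothetical placeholders, not the WETF study's data.

```python
import numpy as np

# Hypothetical known vs. digitized grid coordinates (cm); illustrative only.
known = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
digitized = np.array([[0.1, -0.2], [10.4, 0.1], [-0.3, 10.2], [10.6, 10.5]])

# Error per grid point: Euclidean distance between known and calculated positions.
errors = np.linalg.norm(digitized - known, axis=1)

# Report the worst case as a percentage of the field of view, mirroring the
# "as high as 8 percent" style of result in the abstract.
field_of_view = 10.0  # cm, assumed grid extent
print(errors, 100.0 * errors.max() / field_of_view)
```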
De Sá Teixeira, Nuno Alexandre
2014-12-01
Given its conspicuous nature, gravity has been acknowledged by several research lines as a prime factor in structuring the spatial perception of one's environment. One such line of enquiry has focused on errors in spatial localization aimed at the vanishing location of moving objects - it has been systematically reported that humans mislocalize spatial positions forward, in the direction of motion (representational momentum) and downward in the direction of gravity (representational gravity). Moreover, spatial localization errors were found to evolve dynamically with time in a pattern congruent with an anticipated trajectory (representational trajectory). The present study attempts to ascertain the degree to which vestibular information plays a role in these phenomena. Human observers performed a spatial localization task while tilted to varying degrees and referring to the vanishing locations of targets moving along several directions. A Fourier decomposition of the obtained spatial localization errors revealed that although spatial errors were increased "downward" mainly along the body's longitudinal axis (idiotropic dominance), the degree of misalignment between the latter and physical gravity modulated the time course of the localization responses. This pattern is surmised to reflect increased uncertainty about the internal model when faced with conflicting cues regarding the perceived "downward" direction.
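The Fourier decomposition mentioned above can be illustrated with a first-harmonic fit of localization error against target motion direction. The sketch below is a schematic of that single analysis step; the eight error values are invented placeholders, not the study's data.

```python
import numpy as np

# Hypothetical mean localization errors (deg) for targets moving in eight directions.
motion_dirs = np.deg2rad(np.arange(0, 360, 45))
errors = np.array([0.9, 0.6, 0.1, -0.2, -0.4, -0.1, 0.3, 0.7])

# First Fourier harmonic of error vs. motion direction: its amplitude measures how
# strongly errors are pulled along one axis, and its phase estimates that axis,
# e.g. the body's longitudinal axis under idiotropic dominance.
c1 = (2.0 / len(errors)) * np.sum(errors * np.exp(-1j * motion_dirs))
print(np.abs(c1), np.degrees(np.angle(c1)))
```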
Impact of an antiretroviral stewardship strategy on medication error rates.
Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E
2018-05-02
The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate (p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error (p < 0.001) and those with 2 or more errors (p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention (p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
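The headline comparison above is a before/after difference in the proportion of admissions with at least one error. A minimal sketch of such a test follows; the abstract reports error totals (208 vs. 24) but not admission denominators, so the counts below are hypothetical placeholders.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts: admissions with >=1 medication error vs. error-free
# admissions, pre- and post-intervention. These denominators are assumed,
# not taken from the abstract.
with_error = [150, 20]
without_error = [850, 980]

chi2, p, dof, _ = chi2_contingency([with_error, without_error])
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```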
Contextual specificity in perception and action
NASA Technical Reports Server (NTRS)
Proffitt, Dennis R.
1991-01-01
The visually guided control of helicopter flight is a human achievement, and thus understanding this skill is, in part, a psychological problem. The abilities of skilled pilots are impressive, and yet it is of concern that pilots' performance is less than ideal: they suffer from workload constraints, make occasional errors, and are subject to such debilities as simulator sickness. Remedying such deficiencies is both an engineering and a psychological problem. When studying the psychological aspects of this problem, it is desirable to simplify the problem as much as possible and thereby sidestep as many intractable psychological issues as possible. Simply stated, we do not want to have to resolve such polemics as the mind-body problem in order to contribute to the design of more effective helicopter systems. On the other hand, the study of human behavior is a psychological endeavor, and certain problems cannot be evaded. Four related issues that are of psychological significance in understanding the visually guided control of helicopter flight are discussed. First, a selected discussion of the nature of descriptive levels in analyzing human perception and performance is presented. It is argued that the appropriate level of description for perception is kinematical, and for performance, it is procedural. Second, it is argued that investigations into pilot performance cannot ignore the nature of pilots' phenomenal experience. The conscious control of actions is not based upon environmental states of affairs, nor upon the optical information that specifies them. Actions are coupled to perceptions. Third, the acquisition of skilled actions in the context of inherent misperceptions is discussed. Such skills may be error prone in some situations, but not in others. Finally, I discuss the contextual relativity of human errors. Each of these four issues relates to a common theme: the control of action is mediated by phenomenal experience, the veracity of which is context specific.
Human Reliability and the Cost of Doing Business
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2014-01-01
Human error cannot be defined unambiguously in advance of its happening; an action often becomes an error only after the fact. The same action can result in a tragic accident in one situation or a heroic action given a more favorable outcome. People often forget that we employ humans in business and industry for their flexibility and capability to change when needed. In complex systems, operations are driven by the specifications of the system and the system structure; people provide the flexibility to make it work. Human error has been reported as being responsible for 60%-80% of failures, accidents, and incidents in high-risk industries. We do not have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed. There are actions that can be taken to reduce the risk of human error.
Yazmir, Boris; Reiner, Miriam
2018-05-15
Any motor action is, by nature, potentially accompanied by human errors. In order to facilitate development of error-tailored Brain-Computer Interface (BCI) correction systems, we focused on internal, human-initiated errors, and investigated EEG correlates of user outcome successes and errors during a continuous 3D virtual tennis game against a computer player. We used a multisensory, 3D, highly immersive environment. Missing and repelling the tennis ball were considered as 'error' (miss) and 'success' (repel). Unlike most previous studies, where the environment "encouraged" the participant to make a mistake, here errors happened naturally, resulting from motor-perceptual-cognitive processes of incorrect estimation of the ball kinematics, and can be regarded as user-internal, self-initiated errors. Results show distinct and well-defined Event-Related Potentials (ERPs), embedded in the ongoing EEG, that differ across conditions by waveforms, scalp signal distribution maps, source estimation results (sLORETA), and time-frequency patterns, establishing a series of typical features that allow valid discrimination between user internal outcome success and error. The significant delay in latency between positive peaks of error- and success-related ERPs suggests a cross-talk between top-down and bottom-up processing, represented by an outcome recognition process, in the context of the game world. Success-related ERPs had a central scalp distribution, while error-related ERPs were centro-parietal. The unique characteristics and sharp differences between EEG correlates of error/success provide the crucial components for an improved BCI system. The features of the EEG waveform can be used to detect user action outcome, to be fed into the BCI correction system. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Unreliable numbers: error and harm induced by bad design can be reduced by better design
Thimbleby, Harold; Oladimeji, Patrick; Cairns, Paul
2015-01-01
Number entry is a ubiquitous activity and is often performed in safety- and mission-critical procedures, such as healthcare, science, finance, aviation and in many other areas. We show that Monte Carlo methods can quickly and easily compare the reliability of different number entry systems. A surprising finding is that many common, widely used systems are defective, and induce unnecessary human error. We show that Monte Carlo methods enable designers to explore the implications of normal and unexpected operator behaviour, and to design systems to be more resilient to use error. We demonstrate novel designs with improved resilience, implying that the common problems identified and the errors they induce are avoidable. PMID:26354830
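The Monte Carlo approach described above can be sketched in a few lines: simulate keystroke slips against a target value and compare an interface that accepts any string with one that rejects syntactically malformed numbers. The toy slip model and keypad layout below are our assumptions for illustration, not the authors' simulator.

```python
import random

# Adjacent keys on a typical 12-key numeric pad (assumed layout).
NEIGHBORS = {'1': '24', '2': '135', '3': '26', '4': '157', '5': '2468',
             '6': '359', '7': '48', '8': '579', '9': '68', '0': '8.', '.': '0'}

def keyed(target, p_slip):
    # Each keystroke slips to an adjacent key with probability p_slip.
    return ''.join(random.choice(NEIGHBORS.get(c, c))
                   if random.random() < p_slip else c for c in target)

def is_well_formed(s):
    # Syntax check a more resilient number entry system might enforce.
    return s.count('.') <= 1 and s.replace('.', '', 1).isdigit()

def undetected_error_rate(target='10.5', p_slip=0.01, validate=False, n=200_000):
    bad = 0
    for _ in range(n):
        s = keyed(target, p_slip)
        if validate and not is_well_formed(s):
            continue  # entry rejected: the error is caught rather than delivered
        if s != target:
            bad += 1
    return bad / n

print(undetected_error_rate(validate=False), undetected_error_rate(validate=True))
```

Even this crude syntax check delivers fewer undetected wrong values per attempt, which is the paper's broader point: design choices measurably change induced error rates.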
Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method
Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan
2015-07-29
Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method is based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical of human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. Then, the data collection framework and process is described, and how the collected data were used to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. These challenges reflect the data needs specific to IDHEAS; more importantly, they also represent general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.
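To make the quantification structure concrete, here is a schematic of a single CFM decision tree written as a plain function from context factors to an HEP. The branch factors and probabilities are invented placeholders; the calibrated IDHEAS values come from data and expert judgment, which is precisely the challenge the article discusses.

```python
# Schematic decision tree for one crew failure mode (CFM).
# Branch conditions and HEP values are invented placeholders,
# not calibrated IDHEAS numbers.
def cfm_hep(cue_salient: bool, workload_high: bool, training_recent: bool) -> float:
    if cue_salient:
        if workload_high:
            return 3e-3 if training_recent else 1e-2
        return 1e-3
    # A non-salient cue is far more likely to be missed under high workload.
    return 5e-2 if workload_high else 1e-2

print(cfm_hep(cue_salient=False, workload_high=True, training_recent=False))
```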
Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.
Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P
2016-03-01
Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety.
Adaptive Automation and Cue Invocation: The Effect of Cue Timing on Operator Error
2013-05-01
[Indexed excerpt only; the full abstract is not available. The recoverable fragments indicate that the study examined prospective memory errors (memory for intended actions that are planned to be performed at some designated point in the future [20]) using the RESCHU supervisory control task [21], which was designed by a Navy pilot familiar with supervisory control tasks. A cited reference survives: Parasuraman, R. (2000). Designing automation for human use: Empirical studies and quantitative models. Ergonomics, 43, 931-951.]
SDI Software Technology Program Plan Version 1.5
1987-06-01
[Indexed excerpts only; the full document is not available. The recoverable fragments discuss computer generation of auditory communication of meaningful speech (most speech synthesizers are based on mathematical models of the human vocal tract), oral/auditory and multimodal communications that have not fully matured, humans' superior pattern-matching and subliminal intuitive deduction capabilities, and the observation that "the error performance of humans can be helped by careful ..." (text truncated).]
Managing human fallibility in critical aerospace situations
NASA Astrophysics Data System (ADS)
Tew, Larry
2014-11-01
Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis, and invariably schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries like medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporate the steps needed to manage and minimize error.
Differing Air Traffic Controller Responses to Similar Trajectory Prediction Errors
NASA Technical Reports Server (NTRS)
Mercer, Joey; Hunt-Espinosa, Sarah; Bienert, Nancy; Laraway, Sean
2016-01-01
A Human-In-The-Loop simulation was conducted in January of 2013 in the Airspace Operations Laboratory at NASA's Ames Research Center. The simulation airspace included two en route sectors feeding the northwest corner of Atlanta's Terminal Radar Approach Control. The focus of this paper is on how uncertainties in the study's trajectory predictions impacted the controllers' ability to perform their duties. Of particular interest is how the controllers interacted with the delay information displayed in the meter list and data block while managing the arrival flows. Due to wind forecasts with 30-knot over-predictions and 30-knot under-predictions, delay value computations included errors of similar magnitude, albeit in opposite directions. However, when performing their duties in the presence of these errors, did the controllers issue clearances of similar magnitude, albeit in opposite directions?
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Eas M.
2003-01-01
The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.
High altitude cognitive performance and COPD interaction
Kourtidou-Papadeli, C; Papadelis, C; Koutsonikolas, D; Boutzioukas, S; Styliadis, C; Guiba-Tziampiri, O
2008-01-01
Introduction: Thousands of people work and perform every day in high-altitude environments, whether as pilots, shift workers, or mountaineers. The problem is that most of the accidents in this environment have been attributed to human error. The objective of this study was to assess complex cognitive performance as it interacts with respiratory insufficiency at altitudes of 8000 feet and to identify the potential effect of hypoxia on safe performance. Methods: Twenty subjects participated in the study, divided into two groups: Group I with mild asymptomatic chronic obstructive pulmonary disease (COPD), and Group II with normal respiratory function. Altitude was simulated at 8000 ft using gas mixtures. Results: Individuals with mild COPD experienced notable hypoxemia, with significant performance decrements and an increased number of errors at cabin altitude compared to normal subjects, whereas their blood pressure significantly increased. PMID:19048098
Latent error detection: A golden two hours for detection.
Saward, Justin R E; Stanton, Neville A
2017-03-01
Undetected error in safety-critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion was observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, which appears most effective when past tasks are deliberately reviewed within two hours of the error occurring, whilst remaining in the same or a similar sociotechnical environment to that in which the error occurred. Identified ergonomic interventions offer potential mitigation for latent errors, particularly in simple everyday habitual tasks. It is thought safety-critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Structured methods for identifying and correcting potential human errors in aviation operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, W.R.
1997-10-01
Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and the parameters of that model. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty as sensitivity analysis in the PRA model.
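The core quantity in the reliability physics model above is the probability that the operators' performance time exceeds the phenomenological time available. A minimal Monte Carlo sketch follows; the two distributions and their parameters are assumptions for illustration (the study fitted the phenomenological time from simulation results and elicited performance times from operator interviews).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed distributions, for illustration only.
t_phenomenon = rng.normal(loc=30.0, scale=5.0, size=n)               # minutes available
t_performance = rng.lognormal(mean=np.log(15.0), sigma=0.5, size=n)  # minutes needed

# HEP: the action fails when it takes longer than the phenomenon allows.
hep = np.mean(t_performance > t_phenomenon)
print(f"estimated HEP = {hep:.2e}")
```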
Hidalgo-Rodríguez, Marta; Soriano-Meseguer, Sara; Fuguet, Elisabet; Ràfols, Clara; Rosés, Martí
2013-12-18
Several chromatographic systems (three high-performance liquid chromatography systems and two micellar electrokinetic chromatography systems), besides the reference octanol-water partition system, are evaluated by a systematic procedure previously proposed in order to assess their ability to model human skin permeation. The precision achieved when skin-water permeability coefficients are correlated against chromatographic retention factors is predicted within the framework of the solvation parameter model. It consists of estimating the contribution of error due to the biological and chromatographic data, as well as the error coming from the dissimilarity between the human skin permeation and chromatographic systems. Both predictions and experimental tests show that all correlations are greatly affected by the considerable uncertainty of the skin permeability data and the error associated with the dissimilarity between the systems. Correlations with much better predictive abilities are achieved when the volume of the solute is used as an additional variable, which illustrates the main roles of both lipophilicity and size of the solute in penetrating the skin. In this way, the considered systems are able to give precise estimations of human skin permeability coefficients. In particular, the HPLC systems with common C18 columns provide the best performance in emulating the permeation of neutral compounds from aqueous solution through the human skin. As a result, a methodology based on easy, fast, and economical HPLC measurements in a common C18 column has been developed. After a validation based on training and test sets, the method has been applied with good results to the estimation of skin permeation of several hormones and pesticides. Copyright © 2013 Elsevier B.V. All rights reserved.
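The correlation strategy described above amounts to a linear fit of log permeability against chromatographic retention plus solute volume. A minimal sketch with invented data points follows; the values and fitted coefficients carry no physical meaning here.

```python
import numpy as np

# Invented example data: log skin permeability (log_kp), C18 retention
# factors (log_k), and solute volumes (vol). Illustrative only.
log_kp = np.array([-6.3, -5.8, -5.2, -6.9, -5.5])
log_k  = np.array([ 0.2,  0.8,  1.4, -0.1,  1.0])
vol    = np.array([ 1.2,  1.6,  2.1,  0.9,  1.8])

# Retention captures lipophilicity; volume adds the size contribution.
X = np.column_stack([np.ones_like(log_k), log_k, vol])
coef, *_ = np.linalg.lstsq(X, log_kp, rcond=None)
rms = np.sqrt(np.mean((X @ coef - log_kp) ** 2))
print(coef, rms)
```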
Kraemer, Sara; Carayon, Pascale
2007-03-01
This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e., to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audiotaped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, and errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.
Intervention strategies for the management of human error
NASA Technical Reports Server (NTRS)
Wiener, Earl L.
1993-01-01
This report examines the management of human error in the cockpit. The principles probably apply as well to other applications in the aviation realm (e.g., air traffic control, dispatch, weather, etc.) as well as to other high-risk systems outside of aviation (e.g., shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also a means of preventing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.
Advanced Outage and Control Center: Strategies for Nuclear Plant Outage Work Status Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregory Weatherby
The research effort is a part of the Light Water Reactor Sustainability (LWRS) Program. LWRS is a research and development program sponsored by the Department of Energy, performed in close collaboration with industry to provide the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRS Program serves to help the US nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. The Outage Control Center (OCC) Pilot Project was directed at carrying out the applied research for development and piloting of technology designed to enhance safe outage and maintenance operations, improve human performance and reliability, increase overall operational efficiency, and improve plant status control. Plant outage management is a high-priority concern for the nuclear industry from cost and safety perspectives. Unfortunately, many of the underlying technologies supporting outage control are the same as those used in the 1980s. They depend heavily upon large teams of staff, multiple work and coordination locations, and manual administrative actions that require large amounts of paper. Previous work in human reliability analysis suggests that many repetitive tasks, including paperwork tasks, may have a failure rate of 1.0E-3 or higher (Gertman, 1996). With between 10,000 and 45,000 subtasks being performed during an outage (Gomes, 1996), the opportunity for human error of some consequence is a realistic concern. Although a number of factors exist that can make these errors recoverable, reducing and effectively coordinating the sheer number of tasks to be performed, particularly those that are error prone, has the potential to enhance outage efficiency and safety. Additionally, outage management requires precise coordination of work groups that do not always share similar objectives. Outage managers are concerned with schedule and cost, union workers are concerned with performing work that is commensurate with their trade, and support functions (safety, quality assurance, radiological controls, etc.) are concerned with performing the work within the plant's controls and procedures. Approaches to outage management should be designed to increase the active participation of work groups and managers in making decisions that close the gap between competing objectives and reduce the potential for error and process inefficiency.
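The failure-rate and task-count figures cited above imply a striking back-of-envelope result, sketched below under the (optimistic) assumption that subtasks fail independently.

```python
# Expected human errors per outage from the figures cited above,
# assuming independent subtasks (real outages will violate this).
p = 1.0e-3  # per-subtask failure rate (Gertman, 1996)
for n in (10_000, 45_000):  # subtasks per outage (Gomes, 1996)
    print(f"n={n}: expected errors ~ {n * p:.0f}, "
          f"P(at least one) ~ {1 - (1 - p) ** n:.6f}")
```

With these rates, tens of errors per outage are expected, and at least one error is a near certainty, which is why recoverability and coordination matter more than hoping for error-free performance.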
Application of computer vision to automatic prescription verification in pharmaceutical mail order
NASA Astrophysics Data System (ADS)
Alouani, Ali T.
2005-05-01
In large-volume pharmaceutical mail order, before shipping out prescriptions, licensed pharmacists ensure that the drug in the bottle matches the information provided in the patient prescription. Typically, the pharmacist has about 2 sec to complete the verification process for one prescription. Performing about 1800 prescription verifications per hour is tedious and can generate human errors as a result of visual and brain fatigue. Available automatic drug verification systems are limited to a single pill at a time. This is not suitable for large-volume pharmaceutical mail order, where a prescription can have as many as 60 pills and where thousands of prescriptions are filled every day. In an attempt to reduce human fatigue and cost, and to limit human error, the automatic prescription verification system (APVS) was invented to meet the needs of large-scale pharmaceutical mail order. This paper deals with the design and implementation of the first prototype online automatic prescription verification machine to perform the same task currently done by a pharmacist. The emphasis here is on the visual aspects of the machine. The system has been successfully tested on 43,000 prescriptions.
Error-Induced Learning as a Resource-Adaptive Process in Young and Elderly Individuals
NASA Astrophysics Data System (ADS)
Ferdinand, Nicola K.; Weiten, Anja; Mecklinger, Axel; Kray, Jutta
Thorndike described in his law of effect [44] that actions followed by positive events are more likely to be repeated in the future, whereas actions that are followed by negative outcomes are less likely to be repeated. This implies that behavior is evaluated in the light of its potential consequences, and non-reward events (i.e., errors) must be detected for reinforcement learning to take place. In short, humans have to monitor their performance in order to detect and correct errors, and this allows them to successfully adapt their behavior to changing environmental demands and acquire new behavior, i.e., to learn.
The lawful imprecision of human surface tilt estimation in natural scenes
2018-01-01
Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. PMID:29384477
The lawful imprecision of human surface tilt estimation in natural scenes.
Kim, Seha; Burge, Johannes
2018-01-31
Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having groundtruth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world. © 2018, Kim et al.
ERIC Educational Resources Information Center
Boedigheimer, Dan
2010-01-01
Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…
A new method to evaluate human-robot system performance
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Weisbin, C. R.
2003-01-01
One of the key issues in space exploration is that of deciding what space tasks are best done with humans, with robots, or a suitable combination of each. In general, human and robot skills are complementary. Humans provide as yet unmatched capabilities to perceive, think, and act when faced with anomalies and unforeseen events, but there can be huge potential risks to human safety in getting these benefits. Robots provide complementary skills in being able to work in extremely risky environments, but their ability to perceive, think, and act by themselves is currently not error-free, although these capabilities are continually improving with the emergence of new technologies. Substantial past experience validates these generally qualitative notions. However, there is a need for more rigorously systematic evaluation of human and robot roles, in order to optimize the design and performance of human-robot system architectures using well-defined performance evaluation metrics. This article summarizes a new analytical method to conduct such quantitative evaluations. While the article focuses on evaluating human-robot systems, the method is generally applicable to a much broader class of systems whose performance needs to be evaluated.
Human Factors in Training - Space Medicine Proficiency Training
NASA Technical Reports Server (NTRS)
Connell, Erin; Arsintescu, Lucia
2009-01-01
The early Constellation space missions are expected to have medical capabilities very similar to those currently on the Space Shuttle and International Space Station (ISS). For Crew Exploration Vehicle (CEV) missions to ISS, medical equipment will be located on ISS, and carried into CEV in the event of an emergency. Flight Surgeons (FS) on the ground in Mission Control will be expected to direct the Crew Medical Officer (CMO) during medical situations. If there is a loss of signal and the crew is unable to communicate with the ground, a CMO would be expected to carry out medical procedures without the aid of a FS. In these situations, performance support tools can be used to reduce errors and the time needed to perform emergency medical tasks. Work on medical training has been conducted in collaboration with the Medical Training Group at the Space Life Sciences Directorate and with Wyle Lab, which provides medical training to crew members, Biomedical Engineers (BMEs), and flight surgeons under the JSC Space Life Sciences Directorate's Bioastronautics contract. The space medical training work is part of the Human Factors in Training Directed Research Project (DRP) of the Space Human Factors Engineering (SHFE) Project under the Space Human Factors and Habitability (SHFH) Element of the Human Research Program (HRP). Human factors researchers at Johnson Space Center have recently investigated medical performance support tools for CMOs on-orbit and FSs on the ground, and researchers at the Ames Research Center performed a literature review on medical errors. The work proposed for FY10 continues to build on this strong collaboration with the Space Medical Training Group and previous research. This abstract focuses on two areas of work involving performance support tools for space medical operations. One area of research, building on activities from FY08, involved the feasibility of just-in-time (JIT) training techniques and concepts for real-time medical procedures. In Phase 1, preliminary feasibility data were gathered for two types of prototype display technologies: a hand-held PDA and a head-mounted display (HMD). The PDA and HMD were compared while performing a simulated medical procedure using ISS flight-like medical equipment. Based on the outcome of Phase 1, including data on user preferences, further testing was completed using the PDA only. Phase 2 explored a wrist-mounted PDA and compared it to a paper cue card. For each phase, time to complete procedures, errors, and user satisfaction ratings were captured.
Evaluating a medical error taxonomy.
Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie
2002-01-01
Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric
Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface, (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) would be submittable to a NRS, of which the majority were related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.
A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.
Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent
2007-07-20
Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
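The claim above that trial-by-trial consistency "far exceeds the prediction of independent errors" rests on a simple binomial baseline, sketched below with illustrative numbers rather than the study's data.

```python
# If each of 8 observers misses a given trial independently with rate p_miss,
# very few trials should be missed by all 8. Numbers are illustrative.
p_miss, n_observers, n_trials = 0.2, 8, 1000
expected_joint_misses = n_trials * p_miss ** n_observers
print(expected_joint_misses)  # ~0.003 trials; far more observed implies a shared, stimulus-driven limit
```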
Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo
2014-12-01
Human error is defined as an unintended error that is attributable to humans rather than machines, and that is important to avoid to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. Design: Cross-sectional. Setting: Population-based. Participants: A sample of 5,480 subjects who were actively working as train drivers were recruited in South Korea. The participants were 4,634 drivers who completed all questionnaires (response rate 84.6%). Interventions: None. Measurements: The Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Results: Of 4,634 train drivers, 349 (7.5%) showed more than one human error per 5 y. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who experienced severe posttraumatic stress showed a significantly higher number of human errors than those without. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, and work stress after adjusting for age, sex, education years, marital status, and career duration. Conclusions: Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.
Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J
2007-03-28
A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury.
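The optimal trainer derived above combines an error-based update, a forgetting factor, and an error weighting function with a deadband for natural movement variability. The sketch below is a schematic re-statement of that structure with invented gains, not the paper's identified parameters.

```python
import numpy as np

def assist_as_needed(errors, gain=1.0, forget=0.97, deadband=0.02):
    """Schematic error-based trainer: assistance grows with weighted error and
    decays (forgetting factor < 1) whenever errors stay small. The gain,
    forgetting factor, and deadband are illustrative values."""
    force, history = 0.0, []
    for e in errors:
        # Deadband weighting: ignore error within the natural variability band.
        w = np.sign(e) * max(abs(e) - deadband, 0.0)
        force = forget * force + gain * w
        history.append(force)
    return np.array(history)

# Large errors early (novel force field), small variable errors after adaptation:
errs = np.concatenate([np.full(20, 0.10), np.full(30, 0.01)])
print(assist_as_needed(errs)[[0, 19, 49]])  # assistance rises, then fades toward zero
```

Note how the forgetting factor makes assistance fade once errors fall inside the deadband, consistent with the paper's requirement that the robot relax its assistance faster than the human learns.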
Zeleke, Berihun M.; Abramson, Michael J.; Benke, Geza
2018-01-01
Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship. PMID:29587425
An MEG signature corresponding to an axiomatic model of reward prediction error.
Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J
2012-01-02
Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, a MEG signature of prediction error, which emerged approximately 320 ms after an outcome and expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data. Copyright © 2011 Elsevier Inc. All rights reserved.
Automatic image registration performance for two different CBCT systems; variation with imaging dose
NASA Astrophysics Data System (ADS)
Barber, J.; Sykes, J. R.; Holloway, L.; Thwaites, D. I.
2014-03-01
The performance of an automatic image registration algorithm was compared on image sets collected with two commercial CBCT systems, and the relationship with imaging dose was explored. CBCT images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators, across a range of mAs settings. Each CBCT image was registered 100 times, with random initial offsets introduced. Image registration was performed using the grey value correlation ratio algorithm in the Elekta XVI software, to a mask of the prostate volume with 5 mm expansion. Residual registration errors were calculated after correcting for the initial introduced phantom set-up error. Registration performance with the OBI images was similar to that of XVI. There was a clear dependence on imaging dose for the XVI images, with residual errors increasing below 4 mGy. It was not possible to acquire images with doses lower than ~5 mGy with the OBI system, and no evidence of reduced performance was observed at this dose. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 9% of registrations, except for the lowest dose XVI scan (31%). The uncertainty in automatic image registration with both OBI and XVI images was found to be adequate for clinical use within a normal range of acquisition settings.
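The failure criterion above (maximum target registration error beyond 3.6 mm on a 30 mm sphere) can be evaluated for any residual rigid transform. A small sketch follows; the rotation and translation residuals, and the Monte Carlo sampling of the sphere surface, are illustrative assumptions rather than the study's method.

```python
import numpy as np

def max_tre_on_sphere(rotation_deg, translation_mm, radius_mm=30.0, n=2000):
    # Sample points on the sphere surface (illustrative Monte Carlo sampling).
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(n, 3))
    pts *= radius_mm / np.linalg.norm(pts, axis=1, keepdims=True)

    # Residual rigid transform: small rotation about z plus a translation.
    t = np.deg2rad(rotation_deg)
    Rz = np.array([[np.cos(t), -np.sin(t), 0.0],
                   [np.sin(t),  np.cos(t), 0.0],
                   [0.0, 0.0, 1.0]])
    moved = pts @ Rz.T + np.asarray(translation_mm)
    return np.linalg.norm(moved - pts, axis=1).max()

tre = max_tre_on_sphere(rotation_deg=2.0, translation_mm=[1.0, 0.5, 0.0])
print(f"max TRE = {tre:.2f} mm, failure = {tre > 3.6}")  # threshold from the abstract
```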
Reducing Wrong Patient Selection Errors: Exploring the Design Space of User Interface Techniques
Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben
2014-01-01
Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients' identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty-seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed. PMID:25954415
Association rule mining on grid monitoring data to detect error sources
NASA Astrophysics Data System (ADS)
Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin
2010-04-01
Error handling is a crucial task in an infrastructure as complex as a grid. Several monitoring tools are in place that report failing grid jobs, including exit codes. However, the exit codes do not always denote the actual fault that caused the job failure. Human time and knowledge are required to manually trace errors back to the real underlying fault. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the behavior of grid components, taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically, and this information, expressed as association rules, is visualized in a web interface. This work decreases the time needed for fault recovery and improves the grid's reliability.
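A minimal sketch of the underlying technique, assuming hypothetical job records (this is not the paper's implementation): rules of the form {attributes} -> failed are scored by support and confidence, so that rules implicating a problematic component surface automatically.

```python
from itertools import combinations

# Hypothetical grid-job records: one attribute=value item set per job.
jobs = [
    {"site": "siteA", "queue": "long", "status": "failed"},
    {"site": "siteA", "queue": "long", "status": "failed"},
    {"site": "siteA", "queue": "short", "status": "failed"},
    {"site": "siteB", "queue": "long", "status": "ok"},
    {"site": "siteB", "queue": "short", "status": "ok"},
]
transactions = [{f"{k}={v}" for k, v in job.items()} for job in jobs]
n = len(transactions)

def support(itemset):
    """Fraction of transactions containing every item in the set."""
    return sum(itemset <= t for t in transactions) / n

# Enumerate rules {antecedent} -> status=failed with support/confidence.
items = sorted(set().union(*transactions) - {"status=failed", "status=ok"})
for size in (1, 2):
    for antecedent in combinations(items, size):
        a = set(antecedent)
        sup_a = support(a)
        if sup_a == 0:
            continue
        conf = support(a | {"status=failed"}) / sup_a
        if sup_a >= 0.4 and conf >= 0.8:
            print(f"{sorted(a)} -> failed (support={sup_a:.2f}, confidence={conf:.2f})")
```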
Social deviance activates the brain's error-monitoring system.
Kim, Bo-Rin; Liss, Alison; Rao, Monica; Singer, Zachary; Compton, Rebecca J
2012-03-01
Social psychologists have long noted the tendency for human behavior to conform to social group norms. This study examined whether feedback indicating that participants had deviated from group norms would elicit a neural signal previously shown to be elicited by errors and monetary losses. While electroencephalograms were recorded, participants (N = 30) rated the attractiveness of 120 faces and received feedback giving the purported average rating made by a group of peers. The feedback was manipulated so that group ratings either were the same as a participant's rating or deviated by 1, 2, or 3 points. Feedback indicating deviance from the group norm elicited a feedback-related negativity, a brainwave signal known to be elicited by objective performance errors and losses. The results imply that the brain treats deviance from social norms as an error.
Human Error and the International Space Station: Challenges and Triumphs in Science Operations
NASA Technical Reports Server (NTRS)
Harris, Samantha S.; Simpson, Beau C.
2016-01-01
Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.
Johnson, Reva E; Kording, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W
2017-06-01
In this paper we asked the question: if we artificially raise the variability of torque control signals to match that of EMG, do subjects make similar errors and have similar uncertainty about their movements? We answered this question using two experiments in which subjects used three different control signals: torque, torque+noise, and EMG. First, we measured error on a simple target-hitting task in which subjects received visual feedback only at the end of their movements. We found that even when the signal-to-noise ratio was equal across EMG and torque+noise control signals, EMG resulted in larger errors. Second, we quantified uncertainty by measuring the just-noticeable difference of a visual perturbation. We found that for equal errors, EMG resulted in higher movement uncertainty than both torque and torque+noise. These differences suggest that performance and confidence are influenced by more than just the noisiness of the control signal; other factors, such as the user's ability to incorporate feedback and develop accurate internal models, also have significant impacts on the performance and confidence of a person's actions. We theorize that users have difficulty distinguishing between random and systematic errors for EMG control, and future work should examine in more detail the types of errors made with EMG control.
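The torque+noise manipulation can be illustrated with a short, hedged sketch: Gaussian noise is scaled so that the corrupted torque signal's signal-to-noise ratio matches a hypothetical EMG SNR (the SNR value and signal below are illustrative, not the study's data).

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 2000)
torque = np.sin(2 * np.pi * 1.0 * t)           # clean control signal

emg_snr = 4.0                                  # hypothetical EMG power SNR
signal_power = np.mean(torque ** 2)
noise_std = np.sqrt(signal_power / emg_snr)    # choose sigma so SNR matches
torque_plus_noise = torque + rng.normal(0.0, noise_std, size=t.shape)

achieved = signal_power / np.mean((torque_plus_noise - torque) ** 2)
print(f"target SNR = {emg_snr:.1f}, achieved SNR = {achieved:.2f}")
```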
Modeling human response errors in synthetic flight simulator domain
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.
1992-01-01
This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight quality handling simulation.
NASA Astrophysics Data System (ADS)
Marathe, A. R.; Taylor, D. M.
2015-08-01
Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.
Role of dopamine D2 receptors in human reinforcement learning.
Eisenegger, Christoph; Naef, Michael; Linssen, Anke; Clark, Luke; Gandamaneni, Praveen K; Müller, Ulrich; Robbins, Trevor W
2014-09-01
Influential neurocomputational models emphasize dopamine (DA) as an electrophysiological and neurochemical correlate of reinforcement learning. However, evidence of a specific causal role of DA receptors in learning has been less forthcoming, especially in humans. Here we combine, in a between-subjects design, administration of a high dose of the selective DA D2/3-receptor antagonist sulpiride with genetic analysis of the DA D2 receptor in a behavioral study of reinforcement learning in a sample of 78 healthy male volunteers. In contrast to predictions of prevailing models emphasizing DA's pivotal role in learning via prediction errors, we found that sulpiride did not disrupt learning, but rather induced profound impairments in choice performance. The disruption was selective for stimuli indicating reward, whereas loss avoidance performance was unaffected. Effects were driven by volunteers with higher serum levels of the drug, and in those with genetically determined lower density of striatal DA D2 receptors. This is the clearest demonstration to date for a causal modulatory role of the DA D2 receptor in choice performance that might be distinct from learning. Our findings challenge current reward prediction error models of reinforcement learning, and suggest that classical animal models emphasizing a role of postsynaptic DA D2 receptors in motivational aspects of reinforcement learning may apply to humans as well.
Practice increases procedural errors after task interruption.
Altmann, Erik M; Hambrick, David Z
2017-05-01
Positive effects of practice are ubiquitous in human performance, but a finding from memory research suggests that negative effects are possible also. The finding is that memory for items on a list depends on the time interval between item presentations. This finding predicts a negative effect of practice on procedural performance under conditions of task interruption. As steps of a procedure are performed more quickly, memory for past performance should become less accurate, increasing the rate of skipped or repeated steps after an interruption. We found this effect, with practice generally improving speed and accuracy, but impairing accuracy after interruptions. The results show that positive effects of practice can interact with architectural constraints on episodic memory to have negative effects on performance. In practical terms, the results suggest that practice can be a risk factor for procedural errors in task environments with a high incidence of task interruption. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Defining the Relationship Between Human Error Classes and Technology Intervention Strategies
NASA Technical Reports Server (NTRS)
Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)
2002-01-01
One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine the repositories of human factors data to identify the possible relationships between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
Li, Wen-Chin; Harris, Don; Yu, Chung-San
2008-03-01
The human factors analysis and classification system (HFACS) is based upon Reason's organizational model of human error. HFACS was developed as an analytical framework for the investigation of the role of human error in aviation accidents; however, there is little empirical work formally describing the relationship between the components in the model. This research analyses 41 civil aviation accidents occurring to aircraft registered in the Republic of China (ROC) between 1999 and 2006 using the HFACS framework. The results show statistically significant relationships between errors at the operational level and organizational inadequacies at both the immediately adjacent level (preconditions for unsafe acts) and higher levels in the organization (unsafe supervision and organizational influences). The pattern of the 'routes to failure' observed in the data from this analysis of civil aircraft accidents shows great similarities to that observed in the analysis of military accidents. This research lends further support to Reason's model, which suggests that active failures are promoted by latent conditions in the organization. Statistical relationships linking fallible decisions at upper management levels were found to directly affect supervisory practices, thereby creating the psychological preconditions for unsafe acts and hence indirectly impairing the performance of pilots, ultimately leading to accidents.
Chana, Narinder; Porat, Talya; Whittlesea, Cate; Delaney, Brendan
2017-03-01
Electronic prescribing has benefited from computerised clinical decision support systems (CDSSs); however, no published studies have evaluated the potential for a CDSS to support GPs in prescribing specialist drugs. The aim was to identify potential weaknesses and errors in the existing process of prescribing specialist drugs that could be addressed in the development of a CDSS. The study comprised semi-structured interviews with key informants, followed by an observational study involving GPs in the UK. Twelve key informants were interviewed to investigate the use of CDSSs in the UK. Nine GPs were observed while performing case scenarios depicting requests from hospitals or patients to prescribe a specialist drug. Activity diagrams, hierarchical task analysis, and systematic human error reduction and prediction approach (SHERPA) analyses were performed. The current process of prescribing specialist drugs by GPs is prone to error. Errors of omission due to lack of information were the most common, and could potentially result in a GP prescribing a specialist drug that should only be prescribed in hospitals, or prescribing a specialist drug without reference to a shared care protocol. Half of all possible errors in the prescribing process had a high probability of occurrence. A CDSS supporting GPs during the process of prescribing specialist drugs is needed. This could, first, support the decision making of whether or not to undertake prescribing, and, second, provide drug-specific parameters linked to shared care protocols, which could reduce the errors identified and increase patient safety. © British Journal of General Practice 2017.
Reflections on human error - Matters of life and death
NASA Technical Reports Server (NTRS)
Wiener, Earl L.
1989-01-01
The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.
Human performance in the modern cockpit
NASA Technical Reports Server (NTRS)
Dismukes, R. K.; Cohen, M. M.
1992-01-01
This panel was organized by the Aerospace Human Factors Committee to illustrate behavioral research on the perceptual, cognitive, and group processes that determine crew effectiveness in modern cockpits. Crew reactions to the introduction of highly automated systems in the cockpit will be reported on. Automation can improve operational capabilities and efficiency and can reduce some types of human error, but may also introduce entirely new opportunities for error. The problem solving and decision making strategies used by crews led by captains with various personality profiles will be discussed. Also presented will be computational approaches to modeling the cognitive demands of cockpit operations and the cognitive capabilities and limitations of crew members. Factors contributing to aircrew deviations from standard operating procedures and misuse of checklists, often leading to violations, incidents, or accidents, will be examined. The mechanisms of visual perception pilots use in aircraft control, and the implications of these mechanisms for effective design of visual displays, will be discussed.
Using model order tests to determine sensory inputs in a motion study
NASA Technical Reports Server (NTRS)
Repperger, D. W.; Junker, A. M.
1977-01-01
In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or proportional, integral, derivative, structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivative, and its integral), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The procedure identifies the parameters that have the greatest significant effect on reducing the loss functional. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
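A hedged sketch of the model-order idea on synthetic data (not the original identification procedure): fit nested least-squares models built from the error, its derivative, and its integral, and compare the loss reduction achieved by each regressor subset.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.02
t = np.arange(0, 20, dt)
e = np.sin(0.5 * t) + 0.1 * rng.standard_normal(t.size)   # error signal
e_dot = np.gradient(e, dt)
e_int = np.cumsum(e) * dt

# Synthetic operator: uses error and its derivative, but not the integral.
u = 1.2 * e + 0.4 * e_dot + 0.1 * rng.standard_normal(t.size)

candidates = {
    "P": [e],
    "PI": [e, e_int],
    "PD": [e, e_dot],
    "PID": [e, e_dot, e_int],
}
for name, regs in candidates.items():
    X = np.column_stack(regs)
    theta = np.linalg.lstsq(X, u, rcond=None)[0]
    loss = np.mean((u - X @ theta) ** 2)
    print(f"{name:4s} loss = {loss:.4f}")
# The loss drops sharply when e_dot is added but barely when e_int is,
# implying this (synthetic) operator uses error-rate but not integral cues.
```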
Manually locating physical and virtual reality objects.
Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G
2014-09-01
In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise to flexibly simulate arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object. Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting possible influence of cues other than binocular vision. Human physical interaction with objects in VR for simulation, training, and prototyping involving reaching and manually handling virtual objects in a CAVE are more accurate than predicted when locating farther objects.
Karsh, B‐T; Holden, R J; Alper, S J; Or, C K L
2006-01-01
The goal of improving patient safety has led to a number of paradigms for directing improvement efforts. The main paradigms to date have focused on reducing injuries, reducing errors, or improving evidence based practice. In this paper a human factors engineering paradigm is proposed that focuses on designing systems to improve the performance of healthcare professionals and to reduce hazards. Both goals are necessary, but neither is sufficient to improve safety. We suggest that the road to patient and employee safety runs through the healthcare professional who delivers care. To that end, several arguments are provided to show that designing healthcare delivery systems to support healthcare professional performance and hazard reduction should yield significant patient safety benefits. The concepts of human performance and hazard reduction are explained. PMID:17142611
Effects of Retinal Eccentricity on Human Manual Control
NASA Technical Reports Server (NTRS)
Popovici, Alexandru; Zaal, Peter M. T.
2017-01-01
This study investigated the effects of viewing a primary flight display at different retinal eccentricities on human manual control behavior and performance. Ten participants performed a pitch tracking task while looking at a simplified primary flight display at different horizontal and vertical retinal eccentricities, and with two different controlled dynamics. Tracking performance declined at higher eccentricity angles and participants behaved more nonlinearly. The visual error rate gain increased with eccentricity for single-integrator-like controlled dynamics, but decreased for double-integrator-like dynamics. Participants' visual time delay was up to 100 ms higher at the highest horizontal eccentricity compared to foveal viewing. Overall, vertical eccentricity had a larger impact than horizontal eccentricity on most of the human manual control parameters and performance. Results might be useful in the design of displays and procedures for critical flight conditions such as in an aerodynamic stall.
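For intuition about how such parameters enter a manual control model, here is a hedged sketch of a gain-plus-visual-time-delay operator tracking through single- and double-integrator dynamics. The gains, delay, and forcing function are illustrative choices, not the values identified in this study.

```python
import numpy as np

def track(double_integrator, Kp=2.0, Kd=1.5, delay_s=0.30, T=30.0, dt=0.01):
    """RMS tracking error for a gain-plus-delay operator model."""
    n, lag = int(T / dt), int(delay_s / dt)
    target = np.sin(0.4 * np.arange(n) * dt)
    y, v = 0.0, 0.0
    err = np.zeros(n)
    for k in range(n):
        err[k] = target[k] - y
        # Operator responds to the *delayed* displayed error (visual delay).
        e_del = err[k - lag] if k >= lag else 0.0
        e_rate = (err[k - lag] - err[k - lag - 1]) / dt if k > lag else 0.0
        u = Kp * e_del + (Kd * e_rate if double_integrator else 0.0)
        if double_integrator:       # double-integrator-like dynamics
            v += u * dt
            y += v * dt
        else:                       # single-integrator-like dynamics
            y += u * dt
    return np.sqrt(np.mean(err ** 2))

print(f"single-integrator RMS error: {track(False):.3f}")
print(f"double-integrator RMS error: {track(True):.3f}")
```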
Human-simulation-based learning to prevent medication error: A systematic review.
Sarfati, Laura; Ranchon, Florence; Vantard, Nicolas; Schwiertz, Vérane; Larbre, Virginie; Parat, Stéphanie; Faudel, Amélie; Rioufol, Catherine
2018-01-31
In the past 2 decades, there has been increasing interest in simulation-based learning programs to prevent medication error (ME). By improving knowledge, skills, and attitudes in prescribers, nurses, and pharmaceutical staff, these methods enable training without directly involving patients. However, best practices for simulation for healthcare providers are as yet undefined. By analysing the current state of experience in the field, the present review aims to assess whether human simulation in healthcare helps to reduce ME. A systematic review was conducted on Medline from 2000 to June 2015, associating the terms "Patient Simulation," "Medication Errors," and "Simulation Healthcare." Reports of technology-based simulation were excluded, to focus exclusively on human simulation in nontechnical skills learning. Twenty-one studies assessing simulation-based learning programs were selected, focusing on pharmacy, medicine or nursing students, or concerning programs aimed at reducing administration or preparation errors, managing crises, or learning communication skills for healthcare professionals. The studies varied in design, methodology, and assessment criteria. Few demonstrated that simulation was more effective than didactic learning in reducing ME. This review highlights a lack of long-term assessment and real-life extrapolation, with limited scenarios and participant samples. These various experiences, however, help in identifying the key elements required for an effective human simulation-based learning program for ME prevention: scenario design, debriefing, and perception assessment. The performance of these programs depends on their ability to reflect reality and on professional guidance. Properly regulated simulation is a good way to train staff in events that happen only exceptionally, as well as in standard daily activities. By integrating human factors, simulation seems to be effective in preventing iatrogenic risk related to ME, if the program is well designed. © 2018 John Wiley & Sons, Ltd.
Fusing face-verification algorithms and humans.
O'Toole, Alice J; Abdi, Hervé; Jiang, Fang; Phillips, P Jonathon
2007-10-01
It has been demonstrated recently that state-of-the-art face-recognition algorithms can surpass human accuracy at matching faces over changes in illumination. The ranking of algorithms and humans by accuracy, however, does not provide information about whether algorithms and humans perform the task comparably or whether algorithms and humans can be fused to improve performance. In this paper, we fused humans and algorithms using partial least squares regression (PLSR). In the first experiment, we applied PLSR to face-pair similarity scores generated by seven algorithms participating in the Face Recognition Grand Challenge. The PLSR produced an optimal weighting of the similarity scores, which we tested for generality with a jackknife procedure. Fusing the algorithms' similarity scores using the optimal weights produced a twofold reduction of error rate over the most accurate algorithm. Next, human-subject-generated similarity scores were added to the PLSR analysis. Fusing humans and algorithms increased the performance to near-perfect classification accuracy. These results are discussed in terms of maximizing face-verification accuracy with hybrid systems consisting of multiple algorithms and humans.
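A minimal sketch of score-level PLSR fusion with scikit-learn, under the assumption of synthetic similarity scores (this is not the authors' code or data):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_pairs, n_algorithms = 400, 7
labels = rng.integers(0, 2, n_pairs)   # 1 = same person, 0 = different
# Synthetic scores: each algorithm sees the truth through its own noise.
scores = labels[:, None] + rng.normal(0.0, 1.0, (n_pairs, n_algorithms))

train, test = slice(0, 300), slice(300, None)
pls = PLSRegression(n_components=3)
pls.fit(scores[train], labels[train].astype(float))
fused = pls.predict(scores[test]).ravel()

accuracy = np.mean((fused > 0.5) == labels[test])
print(f"fused verification accuracy: {accuracy:.2%}")
# A human-subject score column could be appended to `scores` the same way.
```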
Predictive models of safety based on audit findings: Part 1: Model development and reliability.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-03-01
This two-part study aimed at the quantitative validation of safety audit tools as predictors of safety performance, as we were unable to find prior studies that tested audit validity against safety outcomes. An aviation maintenance domain was chosen for this work as both audits and safety outcomes are currently prescribed and regulated. In Part 1, we developed a Human Factors/Ergonomics classification framework, based on the HFACS model (Shappell and Wiegmann, 2001a,b), for the human errors detected by audits, because merely counting audit findings did not predict future safety. The framework was tested for measurement reliability using four participants, two of whom classified errors on 1238 audit reports. Kappa values leveled out after about 200 audits at between 0.5 and 0.8 for different tiers of error categories. This showed sufficient reliability to proceed with prediction validity testing in Part 2. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
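Since the reliability result rests on kappa, a minimal sketch of Cohen's kappa for two raters follows; the category labels are hypothetical stand-ins for HFACS-style error classes.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1.0 - expected)

a = ["skill", "skill", "decision", "perception", "skill", "decision"]
b = ["skill", "decision", "decision", "perception", "skill", "skill"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```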
NASA Technical Reports Server (NTRS)
Deloach, R.
1981-01-01
The Fraction Impact Method (FIM), developed by the National Research Council (NRC) for assessing the amount and physiological effect of noise, is described. Here, the number of people exposed to a given level of noise is multiplied by a weighting factor that depends on noise level. It is pointed out that the Aircraft-noise Levels and Annoyance MOdel (ALAMO), recently developed at NASA Langley Research Center, can perform the NRC fractional impact calculations for given modes of operation at any U.S. airport. The sensitivity of these calculations to errors in estimates of population, noise level, and human subjective response is discussed. It is found that a change in source noise causes a substantially smaller change in contour area than would be predicted simply on the basis of inverse square law considerations. Another finding is that the impact calculations are generally less sensitive to source noise errors than to systematic errors in population or subjective response.
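A hedged sketch of the fractional impact arithmetic: population counts in each noise band are multiplied by a level-dependent weight and summed. The weighting function and exposure counts below are placeholder assumptions, not the NRC's actual curve or any airport's data.

```python
def weight(day_night_level_db):
    """Hypothetical weighting: 0 below 55 dB, rising linearly to 1 at 85 dB."""
    return min(max((day_night_level_db - 55.0) / 30.0, 0.0), 1.0)

exposure = {60: 12000, 65: 5000, 70: 1800, 75: 400}   # people per noise band

fractional_impact = sum(n * weight(level) for level, n in exposure.items())
print(f"level-weighted population (fractional impact): {fractional_impact:.0f}")
```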
Human Research Program Space Human Factors Engineering (SHFE) Standing Review Panel (SRP)
NASA Technical Reports Server (NTRS)
Wichansky, Anna; Badler, Norman; Butler, Keith; Cummings, Mary; DeLucia, Patricia; Endsley, Mica; Scholtz, Jean
2009-01-01
The Space Human Factors Engineering (SHFE) Standing Review Panel (SRP) evaluated 22 gaps and 39 tasks in the three risk areas assigned to the SHFE Project. The area where tasks were best designed to close the gaps, and where the fewest gaps were left out, was the Risk of Reduced Safety and Efficiency due to Inadequate Design of Vehicle, Environment, Tools or Equipment. The areas with more issues concerning gaps and tasks, including poor or inadequate fit of tasks to gaps and missing gaps, were the Risk of Errors due to Poor Task Design and the Risk of Errors due to Inadequate Information. One risk, the Risk of Errors due to Inappropriate Levels of Trust in Automation, should be added. If astronauts trust automation too much in areas where it should not be trusted, but should instead be tempered with human judgment and decision making, they will incur errors. Conversely, if they do not trust automation when it should be trusted, as in cases where it can sense aspects of the environment such as radiation levels or distances in space, they will also incur errors. This will be a larger risk when astronauts are less able to rely on human mission control experts and are out of touch, far away, and on their own. The SRP also identified 11 new gaps and five new tasks. Although the SRP had an extremely large quantity of reading material prior to and during the meeting, we still did not feel we had an overview of the activities and tasks the astronauts would be performing in exploration missions. Without a detailed task analysis and taxonomy of the activities the humans would be engaged in, we felt it was impossible to know whether the gaps and tasks were really sufficient to ensure human safety, performance, and comfort in the exploration missions. The SRP had difficulty evaluating many of the gaps and tasks that were not as quantitative as those related to concrete physical danger, such as excessive noise and vibration. Often the research tasks for cognitive risks that accompany poor task or information design addressed only part, but not all, of the gaps they were programmed to fill. In fact, in many cases the tasks outlined will not close the gap but only scratch the surface. In other cases, the gap was written too broadly and should really be restated in a more constrained way that can be addressed by a well-organized and complementary set of tasks. In many cases, the research results should be turned into guidelines for design. However, it was not clear whether the researchers or another group would construct and deliver these guidelines.
The relevance of error analysis in graphical symbols evaluation.
Piamonte, D P
1999-01-01
In an increasing number of modern tools and devices, small graphical symbols appear simultaneously in sets as parts of the human-machine interfaces. The presence of each symbol can influence the other's recognizability and correct association to its intended referents. Thus, aside from correct associations, it is equally important to perform certain error analysis of the wrong answers, misses, confusions, and even lack of answers. This research aimed to show how such error analyses could be valuable in evaluating graphical symbols especially across potentially different user groups. The study tested 3 sets of icons representing 7 videophone functions. The methods involved parameters such as hits, confusions, missing values, and misses. The association tests showed similar hit rates of most symbols across the majority of the participant groups. However, exploring the error patterns helped detect differences in the graphical symbols' performances between participant groups, which otherwise seemed to have similar levels of recognition. These are very valuable not only in determining the symbols to be retained, replaced or re-designed, but also in formulating instructions and other aids in learning to use new products faster and more satisfactorily.
Development of an FAA-EUROCONTROL technique for the analysis of human error in ATM : final report.
DOT National Transportation Integrated Search
2002-07-01
Human error has been identified as a dominant risk factor in safety-oriented industries such as air traffic control (ATC). However, little is known about the factors leading to human errors in current air traffic management (ATM) systems. The first s...
Human Error: The Stakes Are Raised.
ERIC Educational Resources Information Center
Greenberg, Joel
1980-01-01
Mistakes related to the operation of nuclear power plants and other technologically complex systems are discussed. Recommendations are given for decreasing the chance of human error in the operation of nuclear plants. The causes of the Three Mile Island incident are presented in terms of the human error element. (SA)
Avoiding Human Error in Mission Operations: Cassini Flight Experience
NASA Technical Reports Server (NTRS)
Burk, Thomas A.
2012-01-01
Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas, and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans.
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-03-23
We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis.
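A hedged sketch of the hybrid scheme as commonly formulated (ARIMA captures the linear structure, a neural network models the nonlinear residuals); the synthetic series, ARIMA order, and lag count below are placeholders, not the paper's data or settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(57)
series = 10 * np.exp(-0.05 * t) + 0.5 * np.sin(t) + rng.normal(0, 0.2, 57)
train, test = series[:53], series[53:]        # cf. 1956-2008 vs 2009-2012

arima = ARIMA(train, order=(1, 1, 1)).fit()   # linear component
linear_fc = arima.forecast(steps=len(test))
residuals = np.asarray(arima.resid)[1:]       # drop the differencing artifact

lags = 3                                      # nonlinear residual model
X = np.column_stack([residuals[i:len(residuals) - lags + i] for i in range(lags)])
y = residuals[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)

window = list(residuals[-lags:])              # recursive residual forecasts
hybrid_fc = []
for f in linear_fc:
    corr = nn.predict(np.array(window)[None, :])[0]
    hybrid_fc.append(f + corr)
    window = window[1:] + [corr]

mae = np.mean(np.abs(np.array(hybrid_fc) - test))
print(f"hybrid MAE on held-out years: {mae:.3f}")
```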
Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.
Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn
2017-07-01
The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study gathered feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire, then summarized the results and returned them for feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as the most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top 10 errors list based on means, with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.
Probabilistic simulation of the human factor in structural reliability
NASA Astrophysics Data System (ADS)
Chamis, Christos C.; Singhal, Surendra N.
1994-09-01
The formal approach described herein computationally simulates the probable ranges of uncertainties for the human factor in probabilistic assessments of structural reliability. Human factors such as marital status, professional status, home life, job satisfaction, work load, and health are studied by using a multifactor interaction equation (MFIE) model to demonstrate the approach. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Subsequently performed probabilistic sensitivity studies assess the suitability of the MFIE as well as the validity of the whole approach. Results show that uncertainties range from 5 to 30 percent for the most optimistic case, assuming 100 percent for no error (perfect performance).
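A minimal sketch of a product-form multifactor interaction equation, with illustrative factor ranges and exponents (assumptions for demonstration, not the paper's calibrated values):

```python
def mfie(factors):
    """Product-form MFIE: each factor is (current, reference, ultimate, exponent)."""
    performance = 1.0
    for current, ref, ult, exp in factors:
        # Each term degrades performance as the factor approaches its
        # ultimate (worst-case) value; exponents set factor sensitivity.
        performance *= ((ult - current) / (ult - ref)) ** exp
    return performance

factors = [
    # (current, reference, ultimate, exponent) -- hypothetical 0-10 scales
    (6.0, 0.0, 10.0, 0.50),   # work load
    (3.0, 0.0, 10.0, 0.50),   # job dissatisfaction
    (2.0, 0.0, 10.0, 0.25),   # health strain
]
print(f"relative human performance: {mfie(factors):.2f}")  # 1.0 = no error
```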
Bayesian network models for error detection in radiotherapy plans
NASA Astrophysics Data System (ADS)
Kalet, Alan M.; Gennari, John H.; Ford, Eric C.; Phillips, Mark H.
2015-04-01
The purpose of this study is to design and develop a probabilistic network for detecting errors in radiotherapy plans for use at the time of initial plan verification. Our group has initiated a multi-pronged approach to reduce these errors. We report on our development of Bayesian models of radiotherapy plans. Bayesian networks consist of joint probability distributions that define the probability of one event, given some set of other known information. Using the networks, we find the probability of obtaining certain radiotherapy parameters, given a set of initial clinical information. A low probability in a propagated network then corresponds to potential errors to be flagged for investigation. To build our networks we first interviewed medical physicists and other domain experts to identify the relevant radiotherapy concepts and their associated interdependencies and to construct a network topology. Next, to populate the network’s conditional probability tables, we used the Hugin Expert software to learn parameter distributions from a subset of de-identified data derived from a radiation oncology based clinical information database system. These data represent 4990 unique prescription cases over a 5 year period. Under test case scenarios with approximately 1.5% introduced error rates, network performance produced areas under the ROC curve of 0.88, 0.98, and 0.89 for the lung, brain and female breast cancer error detection networks, respectively. Comparison of the brain network to human experts performance (AUC of 0.90 ± 0.01) shows the Bayes network model performs better than domain experts under the same test conditions. Our results demonstrate the feasibility and effectiveness of comprehensive probabilistic models as part of decision support systems for improved detection of errors in initial radiotherapy plan verification procedures.
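A toy version of the approach using pgmpy, hedged as a sketch rather than the authors' networks: a two-node model in which a low posterior probability for a plan parameter flags it for review. The variables and probabilities are invented for illustration.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("Site", "DosePerFraction")])
model.add_cpds(
    TabularCPD("Site", 2, [[0.7], [0.3]]),       # states: 0 = lung, 1 = brain
    TabularCPD("DosePerFraction", 2,             # states: 0 = 2 Gy, 1 = 3 Gy
               [[0.9, 0.4],                      # P(2 Gy | lung), P(2 Gy | brain)
                [0.1, 0.6]],
               evidence=["Site"], evidence_card=[2]),
)
assert model.check_model()

posterior = VariableElimination(model).query(
    ["DosePerFraction"], evidence={"Site": 0})   # a lung case
p_plan = posterior.values[1]                     # the plan specifies 3 Gy
flag = "  <- flag for review" if p_plan < 0.2 else ""
print(f"P(3 Gy | lung) = {p_plan:.2f}{flag}")
```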
The effect of divided attention on novices and experts in laparoscopic task performance.
Ghazanfar, Mudassar Ali; Cook, Malcolm; Tang, Benjie; Tait, Iain; Alijani, Afshin
2015-03-01
Attention is important for the skilful execution of surgery. The surgeon's attention during surgery is divided between the surgery and outside distractions. The effect of this divided attention has not been well studied previously. We aimed to compare the effect of dividing the attention of novices and experts on laparoscopic task performance. Following ethical approval, 25 novices and 9 expert surgeons performed a standardised peg transfer task in a laboratory setup under three randomly assigned conditions: silent as the control condition, and two standardised auditory distracting tasks requiring response (easy and difficult) as the study conditions. Human reliability assessment was used for surgical task analysis. Primary outcome measures were correct auditory responses, task time, number of surgical errors and instrument movements. Secondary outcome measures included error rate, error probability and hand-specific differences. Non-parametric statistics were used for data analysis. In total, 21,109 movements and 9,036 errors were analysed. Novices had increased mean task completion time (seconds) (171 ± 44 SD vs. 149 ± 34, p < 0.05), number of total movements (227 ± 27 vs. 213 ± 26, p < 0.05) and number of errors (127 ± 51 vs. 96 ± 28, p < 0.05) during difficult study conditions compared to control. Correct responses to auditory stimuli were less frequent in experts (68%) compared to novices (80%). There was a positive correlation between error rate and error probability in novices (r² = 0.533, p < 0.05) but not in experts (r² = 0.346, p > 0.05). Divided attention conditions in the theatre environment require careful consideration during surgical training, as junior surgeons are less able to focus their attention under these conditions.
Parrish, Audrey E; Agrillo, Christian; Perdue, Bonnie M; Beran, Michael J
2016-02-01
One approach to gaining a better understanding of how we perceive the world is to assess the errors that human and nonhuman animals make in perceptual processing. Developmental and comparative perspectives can contribute to identifying the mechanisms that underlie systematic perceptual errors often referred to as perceptual illusions. In the visual domain, some illusions appear to remain constant across the lifespan, whereas others change with age. From a comparative perspective, many of the illusions observed in humans appear to be shared with nonhuman primates. Numerosity illusions are a subset of visual illusions and occur when the spatial arrangement of stimuli within a set influences the perception of quantity. Previous research has found one such illusion that readily occurs in human adults, the Solitaire illusion. This illusion appears to be less robust in two monkey species, rhesus macaques and capuchin monkeys. We attempted to clarify the ontogeny of this illusion from a developmental and comparative perspective by testing human children and task-naïve capuchin monkeys in a computerized quantity judgment task. The overall performance of the monkeys suggested that they perceived the numerosity illusion, although there were large differences among individuals. Younger children performed similarly to the monkeys, whereas older children more consistently perceived the illusion. These findings suggest that human-unique perceptual experiences with the world might play an important role in the emergence of the Solitaire illusion in human adults, although other factors also may contribute. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2011-01-01
As automation and advanced technologies are introduced into transport systems ranging from the Next Generation Air Transportation System, termed NextGen, to advanced surface transportation systems as exemplified by Intelligent Transportation Systems, to future systems designed for space exploration, there is an increased need to validly predict how future systems will be vulnerable to error given the demands imposed by assistive technologies. One formalized approach to studying the impact of assistive technologies on the human operator in a safe and non-obtrusive manner is the use of human performance models (HPMs). HPMs play an integral role when complex human-system designs are proposed, developed, and tested. One HPM tool, termed the Man-machine Integration Design and Analysis System (MIDAS), is a NASA Ames Research Center HPM software tool that has been applied to predict human-system performance in various domains since 1986. MIDAS is a dynamic, integrated HPM and simulation environment that facilitates the design, visualization, and computational evaluation of complex man-machine system concepts in simulated operational environments. The paper discusses a range of aviation-specific applications, including an approach used to model human error for NASA's Aviation Safety Program, and what-if analyses to evaluate flight deck technologies for NextGen operations. This chapter culminates by raising two challenges for the field of predictive HPMs for complex human-system designs that evaluate assistive technologies: (1) model transparency and (2) model validation.
Yilmaz, Bilal; Asci, Ali; Kucukoglu, Kaan; Albayrak, Mevlut
2016-08-01
A simple high-performance liquid chromatography method has been developed for the determination of formaldehyde in human tissue. Formaldehyde was derivatized with 2,4-dinitrophenylhydrazine. It was extracted from human tissue with ethyl acetate by liquid-liquid extraction and analyzed by high-performance liquid chromatography. The calibration curve was linear in the concentration range of 5.0-200 μg/mL. Intra- and interday precision values for formaldehyde in tissue were <6.9%, and accuracy (relative error) was better than 6.5%. The extraction recoveries of formaldehyde from human tissue were between 88 and 98%. The limits of detection and quantification of formaldehyde were 1.5 and 5.0 μg/mL, respectively. This assay was also applied to liver samples taken from biopsy material. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Pedestrian dead reckoning employing simultaneous activity recognition cues
NASA Astrophysics Data System (ADS)
Altun, Kerem; Barshan, Billur
2012-02-01
We consider the human localization problem using body-worn inertial/magnetic sensor units. Inertial sensors are characterized by a drift error caused by the integration of their rate output to obtain position information. Because of this drift, the position and orientation data obtained from inertial sensors are reliable over only short periods of time. Therefore, position updates from externally referenced sensors are essential. However, if the map of the environment is known, the activity context of the user can provide information about his position. In particular, the switches in the activity context correspond to discrete locations on the map. By performing localization simultaneously with activity recognition, we detect the activity context switches and use the corresponding position information as position updates in a localization filter. The localization filter also involves a smoother that combines the two estimates obtained by running the zero-velocity update algorithm both forward and backward in time. We performed experiments with eight subjects in indoor and outdoor environments involving walking, turning and standing activities. Using a spatial error criterion, we show that the position errors can be decreased by about 85% on the average. We also present the results of two 3D experiments performed in realistic indoor environments and demonstrate that it is possible to achieve over 90% error reduction in position by performing localization simultaneously with activity recognition.
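The zero-velocity update step can be sketched as follows, assuming the common variance-threshold stance detector rather than the authors' specific algorithm; the thresholds and synthetic signal are illustrative.

```python
import numpy as np

def detect_zero_velocity(accel_mag, fs=100, win_s=0.2, g=9.81,
                         var_thresh=0.05, mean_tol=0.3):
    """Flag samples where a sliding window of accelerometer magnitude
    stays near gravity with low variance (likely stance phase)."""
    w = int(win_s * fs)
    flags = np.zeros(accel_mag.size, dtype=bool)
    for i in range(accel_mag.size - w):
        window = accel_mag[i:i + w]
        if window.var() < var_thresh and abs(window.mean() - g) < mean_tol:
            flags[i:i + w] = True
    return flags

# Synthetic signal: quiet stance alternating with oscillatory swing.
fs = 100
t = np.arange(0, 4, 1 / fs)
swing = (t % 1.0) > 0.5
accel = (9.81 + swing * 3.0 * np.sin(2 * np.pi * 5 * t)
         + np.random.default_rng(0).normal(0, 0.05, t.size))

zupt = detect_zero_velocity(accel, fs=fs)
print(f"fraction of samples flagged as stance: {zupt.mean():.2f}")
# At each flagged interval the velocity estimate is reset to zero,
# which bounds the drift from integrating the inertial rates.
```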
From feedback- to response-based performance monitoring in active and observational learning.
Bellebaum, Christian; Colosio, Marco
2014-09-01
Humans can adapt their behavior by learning from the consequences of their own actions or by observing others. Gradual active learning of action-outcome contingencies is accompanied by a shift from feedback- to response-based performance monitoring. This shift is reflected by complementary learning-related changes of two ACC-driven ERP components, the feedback-related negativity (FRN) and the error-related negativity (ERN), which have both been suggested to signal events "worse than expected," that is, a negative prediction error. Although recent research has identified comparable components for observed behavior and outcomes (observational ERN and FRN), it is as yet unknown, whether these components are similarly modulated by prediction errors and thus also reflect behavioral adaptation. In this study, two groups of 15 participants learned action-outcome contingencies either actively or by observation. In active learners, FRN amplitude for negative feedback decreased and ERN amplitude in response to erroneous actions increased with learning, whereas observational ERN and FRN in observational learners did not exhibit learning-related changes. Learning performance, assessed in test trials without feedback, was comparable between groups, as was the ERN following actively performed errors during test trials. In summary, the results show that action-outcome associations can be learned similarly well actively and by observation. The mechanisms involved appear to differ, with the FRN in active learning reflecting the integration of information about own actions and the accompanying outcomes.
Analysis of measured data of human body based on error correcting frequency
NASA Astrophysics Data System (ADS)
Jin, Aiyan; Peipei, Gao; Shang, Xiaomei
2014-04-01
Anthropometry measures all parts of the human body surface, and the measured data form the basis for analysis and study of the human body, for establishing and modifying garment sizes, and for implementing online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analyzed by examining error frequency and applying the analysis of variance method from mathematical statistics. The paper also addresses the accuracy of the measured data, the difficulty of measuring particular parts of the human body, the causes of data errors, and the key points for minimizing errors. By analyzing the measured data on the basis of error frequency, the paper provides reference material to support the development of the garment industry.
A gunner model for an AAA tracking task with interrupted observations
NASA Technical Reports Server (NTRS)
Yu, C. F.; Wei, K. C.; Vikmanis, M.
1982-01-01
The problem of modeling a trained human operator's tracking performance in an anti-aircraft system under various display blanking conditions is discussed. The input to the gunner is the observable tracking error, subjected to repeated interruptions (blanking). A simple and effective gunner model was developed, in which the effect of blanking on the gunner's tracking performance is captured by modeling the observer and controller gains.
Sustained attention performance during sleep deprivation: evidence of state instability
NASA Technical Reports Server (NTRS)
Doran, S. M.; Van Dongen, H. P.; Dinges, D. F.
2001-01-01
Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued support the "state instability" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep-initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.
Comparison of phasing strategies for whole human genomes
Kirkness, Ewen; Schork, Nicholas J.
2018-01-01
Humans are a diploid species that inherit one set of chromosomes paternally and one homologous set of chromosomes maternally. Unfortunately, most human sequencing initiatives ignore this fact in that they do not directly delineate the nucleotide content of the maternal and paternal copies of the 23 chromosomes individuals possess (i.e., they do not ‘phase’ the genome) often because of the costs and complexities of doing so. We compared 11 different widely-used approaches to phasing human genomes using the publicly available ‘Genome-In-A-Bottle’ (GIAB) phased version of the NA12878 genome as a gold standard. The phasing strategies we compared included laboratory-based assays that prepare DNA in unique ways to facilitate phasing as well as purely computational approaches that seek to reconstruct phase information from general sequencing reads and constructs or population-level haplotype frequency information obtained through a reference panel of haplotypes. To assess the performance of the 11 approaches, we used metrics that included, among others, switch error rates, haplotype block lengths, the proportion of fully phase-resolved genes, phasing accuracy and yield between pairs of SNVs. Our comparisons suggest that a hybrid or combined approach that leverages: 1. population-based phasing using the SHAPEIT software suite, 2. either genome-wide sequencing read data or parental genotypes, and 3. a large reference panel of variant and haplotype frequencies, provides a fast and efficient way to produce highly accurate phase-resolved individual human genomes. We found that for population-based approaches, phasing performance is enhanced with the addition of genome-wide read data; e.g., whole genome shotgun and/or RNA sequencing reads. Further, we found that the inclusion of parental genotype data within a population-based phasing strategy can provide as much as a ten-fold reduction in phasing errors. We also considered a majority voting scheme for the construction of a consensus haplotype combining multiple predictions for enhanced performance and site coverage. Finally, we also identified DNA sequence signatures associated with the genomic regions harboring phasing switch errors, which included regions of low polymorphism or SNV density. PMID:29621242
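The switch error rate used as an evaluation metric above is straightforward to compute. A minimal sketch follows (illustrative only; production comparisons against a GIAB gold standard must also handle missing genotypes, phase-block boundaries, and genotyping errors):

```python
def switch_error_rate(truth, pred):
    """Switch error rate between a predicted and a gold-standard phasing.

    truth, pred: sequences over {0, 1} giving, at each heterozygous SNV,
    which haplotype carries the alternate allele. A switch error occurs
    whenever the predicted orientation relative to the truth flips
    between consecutive sites.
    """
    orientation = [t == p for t, p in zip(truth, pred)]
    switches = sum(a != b for a, b in zip(orientation, orientation[1:]))
    return switches / max(len(orientation) - 1, 1)

# One orientation flip between sites 2 and 3 -> 1 switch / 4 intervals.
print(switch_error_rate([0, 0, 1, 1, 0], [0, 0, 0, 0, 1]))  # 0.25
```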
"First, know thyself": cognition and error in medicine.
Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo
2016-04-01
Although error is an integral part of the world of medicine, physicians have always been little inclined to take their own mistakes into account, and the extraordinary technological progress of recent decades does not seem to have produced a significant reduction in the rate of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding considerable investment of human and economic resources, has paved the way for new strategies made available by the development of cognitive psychology, the branch of psychology that aims to understand the mechanisms of human reasoning. This new approach has led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical mode, which is largely automatic and fast-reacting, and a slow or analytical mode, which permits rationally founded answers. One feature of the fast mode of reasoning is the use of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist in which the heuristic triggered fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions that expose us and others to variable degrees of risk. Cognitive errors arise from these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey C. Joe; Diego Mandelli; Ronald L. Boring
2015-07-01
The United States Department of Energy is sponsoring the Light Water Reactor Sustainability program, which has the overall objective of supporting the near-term and the extended operation of commercial nuclear power plants. One key research and development (R&D) area in this program is the Risk-Informed Safety Margin Characterization pathway, which combines probabilistic risk simulation with thermohydraulic simulation codes to define and manage safety margins. The R&D efforts to date, however, have not included robust simulations of human operators, and how the reliability of human performance or lack thereof (i.e., human errors) can affect risk-margins and plant performance. This paper describes current and planned research efforts to address the absence of robust human reliability simulations and thereby increase the fidelity of simulated accident scenarios.
Schleier, Jerome J.; Peterson, Robert K.D.; Irvine, Kathryn M.; Marshall, Lucy M.; Weaver, David K.; Preftakes, Collin J.
2012-01-01
One of the more effective ways of managing high densities of adult mosquitoes that vector human and animal pathogens is ultra-low-volume (ULV) aerosol application of insecticides. The U.S. Environmental Protection Agency performs its human and ecological risk assessments using models that are not validated for ULV insecticide applications, together with assumed exposure scenarios. Currently, there is no validated model that can accurately predict deposition of insecticides applied using ULV technology for adult mosquito management. In addition, little is known about the deposition and drift of small droplets like those used under conditions encountered during ULV applications. The objective of this study was to perform field studies to measure environmental concentrations of insecticides and to develop a validated model to predict the deposition of ULV insecticides. The final regression model was selected by minimizing the Bayesian Information Criterion (BIC), and its prediction performance was evaluated using k-fold cross validation. The coefficients for formulation density and for the interaction of density with droplet count median diameter (CMD) were the largest in the model. The results showed that as the density of the formulation decreases, deposition increases. The interaction of density and CMD showed that higher-density formulations and larger droplets resulted in greater deposition. These results are supported by the aerosol physics literature. The k-fold cross validation demonstrated that the mean square error of the selected regression model is not biased, and the mean square error and mean square prediction error indicated good predictive ability.
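The selection procedure described (minimize BIC, then check predictive ability with k-fold cross validation) can be sketched generically. This is not the authors' code: the Gaussian/OLS form of the BIC and the exhaustive subset search are assumptions, with predictors such as formulation density and the density-CMD interaction entering as columns of X.

```python
import itertools
import numpy as np

def ols_bic(X, y):
    """BIC of an ordinary least squares fit under a Gaussian likelihood."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + k * np.log(n)

def select_by_bic(X, y):
    """Exhaustive search over predictor subsets, minimizing BIC."""
    best_bic, best_cols = np.inf, None
    for r in range(1, X.shape[1] + 1):
        for cols in itertools.combinations(range(X.shape[1]), r):
            bic = ols_bic(X[:, list(cols)], y)
            if bic < best_bic:
                best_bic, best_cols = bic, list(cols)
    return best_bic, best_cols

def kfold_mspe(X, y, cols, k=10, seed=0):
    """k-fold cross-validated mean square prediction error."""
    idx = np.random.default_rng(seed).permutation(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[train][:, cols], y[train], rcond=None)
        errs.append(float(np.mean((y[fold] - X[fold][:, cols] @ beta) ** 2)))
    return float(np.mean(errs))
```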
Tailoring a Human Reliability Analysis to Your Industry Needs
NASA Technical Reports Server (NTRS)
DeMott, D. L.
2016-01-01
Industries at risk of human-error-caused accidents with catastrophic consequences include aviation, medicine (malpractice and medication mistakes), aerospace, oil production, transportation, power production, and manufacturing. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies is used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur, such as tasks, tools, environment, workplace, support, training, and procedures, 4) type and availability of data, 5) how the industry views risk and reliability, and 6) types of emergencies, contingencies, and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed, versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involving humans operating large equipment or transport systems (e.g., railroads or airlines) have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.
NASA Astrophysics Data System (ADS)
Ignac-Nowicka, Jolanta
2018-03-01
The paper analyzes the conditions for safe use of industrial gas systems and the factors influencing gas hazards. A typical gas installation and its basic features are characterized. The results of a gas threat analysis in an industrial enterprise, using the fault tree analysis (FTA) method and the event tree analysis (ETA) method, are presented, and selected methods of identifying hazards in the gas industry are compared with respect to their scope of use. The paper presents an analysis of two exemplary hazards: an industrial gas catastrophe (FTA) and a gas explosion (ETA). In both cases, technical risks and human errors (the human factor) were taken into account. The cause-and-effect relationships between hazards and their causes are presented as diagrams in the figures.
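For readers unfamiliar with FTA, its quantitative core is simple: basic-event probabilities, technical failures and human errors alike, are combined through AND/OR gates up to the top event. A minimal sketch, assuming independent events and entirely made-up probabilities:

```python
def or_gate(*p):
    """Probability that at least one independent input event occurs."""
    q = 1.0
    for pi in p:
        q *= 1.0 - pi
    return 1.0 - q

def and_gate(*p):
    """Probability that all independent input events occur."""
    out = 1.0
    for pi in p:
        out *= pi
    return out

# Hypothetical top event: ignition of an uncontrolled gas release.
p_valve_failure = 1e-3    # technical basic event (made-up rate)
p_operator_error = 5e-3   # human-factor basic event (made-up rate)
p_ignition_source = 1e-2  # made-up rate

p_release = or_gate(p_valve_failure, p_operator_error)
p_top = and_gate(p_release, p_ignition_source)
print(f"top-event probability: {p_top:.2e}")
```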
Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid
NASA Technical Reports Server (NTRS)
VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)
1997-01-01
The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).
Decision Making In A High-Tech World: Automation Bias and Countermeasures
NASA Technical Reports Server (NTRS)
Mosier, Kathleen L.; Skitka, Linda J.; Burdick, Mark R.; Heers, Susan T.; Rosekind, Mark R. (Technical Monitor)
1996-01-01
Automated decision aids and decision support systems have become essential tools in many high-tech environments. In aviation, for example, flight management systems computers not only fly the aircraft, but also calculate fuel efficient paths, detect and diagnose system malfunctions and abnormalities, and recommend or carry out decisions. Air Traffic Controllers will soon be utilizing decision support tools to help them predict and detect potential conflicts and to generate clearances. Other fields as disparate as nuclear power plants and medical diagnostics are similarly becoming more and more automated. Ideally, the combination of human decision maker and automated decision aid should result in a high-performing team, maximizing the advantages of additional cognitive and observational power in the decision-making process. In reality, however, the presence of these aids often short-circuits the way that even very experienced decision makers have traditionally handled tasks and made decisions, and introduces opportunities for new decision heuristics and biases. Results of recent research investigating the use of automated aids have indicated the presence of automation bias, that is, errors made when decision makers rely on automated cues as a heuristic replacement for vigilant information seeking and processing. Automation commission errors, i.e., errors made when decision makers inappropriately follow an automated directive, or automation omission errors, i.e., errors made when humans fail to take action or notice a problem because an automated aid fails to inform them, can result from this tendency. Evidence of the tendency to make automation-related omission and commission errors has been found in pilot self reports, in studies using pilots in flight simulations, and in non-flight decision making contexts with student samples. Considerable research has found that increasing social accountability can successfully ameliorate a broad array of cognitive biases and resultant errors. To what extent these effects generalize to performance situations is not yet empirically established. The two studies to be presented represent concurrent efforts, with student and professional pilot samples, to determine the effects of accountability pressures on automation bias and on the verification of the accurate functioning of automated aids. Students (Experiment 1) and commercial pilots (Experiment 2) performed simulated flight tasks using automated aids. In both studies, participants who perceived themselves as accountable for their strategies of interaction with the automation were significantly more likely to verify its correctness, and committed significantly fewer automation-related errors than those who did not report this perception.
Data Comm Flight Deck Human-in-the-Loop Simulation
NASA Technical Reports Server (NTRS)
Lozito, Sandra; Martin, Lynne Hazel; Sharma, Shivanjli; Kaneshige, John T.; Dulchinos, Victoria
2012-01-01
This presentation discusses an upcoming simulation of Data Comm in the terminal area. The purpose of the presentation is to provide the REDAC committee with a summary of some of the work on Data Comm that is being sponsored by the FAA. The focus of the simulation is on flight crew human performance variables, such as crew procedures, timing, and errors. The simulation is scheduled to be conducted in September 2012.
Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J
2007-01-01
Background: A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Methods: Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. Results: We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. Conclusion: The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury. PMID:17391527
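The error-based controller with a forgetting factor and error weighting described above can be sketched compactly as a trial-to-trial update law. The functional form, gains, and deadband below are illustrative assumptions, not the paper's identified values:

```python
def update_assist(force, error, forget=0.9, gain=0.5, deadband=0.02):
    """One trial-to-trial update of an assist-as-needed controller:
    F[i+1] = f * F[i] + g * w(e[i]).

    The forgetting factor f < 1 systematically decays the robot's
    assistance when errors are small; the error term restores it when
    errors grow. Errors inside the deadband are zeroed out so natural
    movement variability does not sustain assistance.
    """
    weighted_error = error if abs(error) > deadband else 0.0
    return forget * force + gain * weighted_error
```

Consistent with the abstract, fading only works if the forgetting outpaces the human's own learning, so f must be chosen relative to how quickly the learner reduces error.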
Rice, Stephen; McCarley, Jason S
2011-12-01
Automated diagnostic aids prone to false alarms often produce poorer human performance in signal detection tasks than equally reliable miss-prone aids. However, it is not yet clear whether this is attributable to differences in the perceptual salience of the automated aids' misses and false alarms or is the result of inherent differences in operators' cognitive responses to different forms of automation error. The present experiments therefore examined the effects of automation false alarms and misses on human performance under conditions in which the different forms of error were matched in their perceptual characteristics. Young adult participants performed a simulated baggage x-ray screening task while assisted by an automated diagnostic aid. Judgments from the aid were rendered as text messages presented at the onset of each trial, and every trial was followed by a second text message providing response feedback. Thus, misses and false alarms from the aid were matched for their perceptual salience. Experiment 1 found that even under these conditions, false alarms from the aid produced poorer human performance and engendered lower automation use than misses from the aid. Experiment 2, however, found that the asymmetry between misses and false alarms was reduced when the aid's false alarms were framed as neutral messages rather than explicit misjudgments. Results suggest that automation false alarms and misses differ in their inherent cognitive salience and imply that changes in diagnosis framing may allow designers to encourage better use of imperfectly reliable automated aids.
Hooper, Brionny J; O'Hare, David P A
2013-08-01
Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice (44%) the proportion of rotary wing compared to fixed wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.
Understanding reliance on automation: effects of error type, error distribution, age and experience
Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka
2015-01-01
An obstacle detection task supported by “imperfect” automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over relying on it during non-alarms states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142
Peripheral refraction and image blur in four meridians in emmetropes and myopes.
Shen, Jie; Spors, Frank; Egan, Donald; Liu, Chunming
2018-01-01
The peripheral refractive error of the human eye has been hypothesized to be a major stimulus for the development of its central refractive error. The purpose of this study was to investigate changes in peripheral refractive error across the horizontal, vertical, and two diagonal meridians in emmetropic and low, moderate, and high myopic adults. Thirty-four adult subjects were recruited, and aberrations were measured using a modified commercial aberrometer. We then computed the refractive error in power vector notation from second-order Zernike terms. Statistical analysis was performed to evaluate differences in refractive error profiles between the subject groups and across all measured visual field meridians. Small amounts of relative myopic shift were observed in emmetropic and low myopic subjects. However, moderate and high myopic subjects exhibited a relative hyperopic shift in all four meridians. The astigmatic components J0 and J45 changed quadratically or linearly depending on the visual field meridian. Peripheral sphero-cylindrical retinal image blur increased in emmetropic eyes in most of the measured visual fields. The findings indicate an overall emmetropic or slightly relatively myopic periphery (a spherical or oblate retinal shape) in emmetropes and low myopes, whereas moderate and high myopes form a relatively hyperopic periphery (a prolate, or less oblate, retinal shape). In general, human emmetropic eyes demonstrate a higher amount of peripheral retinal image blur.
NASA Technical Reports Server (NTRS)
Mock, Jessica
2005-01-01
Stress, fatigue, and workload affect worker performance. The NSF reported that 61% of respondents lose concentration at work, while 79% occasionally or frequently make errors as a result of being fatigued. Shift work, altered work schedules, long hours of continuous wakefulness, and sleep loss can create sleep and circadian disruptions that degrade waking functions, causing stress and fatigue. A review of the literature found no work linking the combined effects of fatigue, stress, and workload to human performance. This paper addresses which factors within stress, fatigue, and workload have been identified as occupational contributors to performance changes. The results of this research will be applied to underlying models and algorithms that will help predict performance changes in control room operators.
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
2010-03-15
This report classifies accident data using a framework based on Reason's "Swiss cheese" model of human error (1990), in which an accident is likely to occur when all of the errors, or "holes," align. A detailed description of HFACS can be found in Wiegmann and Shappell (2003). [Figure 1: The Swiss cheese model of human error causation.]
Collegiate Aviation Review. September 1994.
ERIC Educational Resources Information Center
Barker, Ballard M., Ed.
This document contains four papers on aviation education. The first paper, "Why Aren't We Teaching Aeronautical Decision Making?" (Richard J. Adams), reviews 15 years of aviation research into the causes of human performance errors in aviation and provides guidelines for designing the next generation of aeronautical decision-making materials.…
A Quality Improvement Project to Decrease Human Milk Errors in the NICU.
Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E
2017-02-01
Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.
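Error rates of this kind, errors per 1000 bottles with monthly volumes that vary, are commonly monitored on a u-chart, one standard statistical process control chart. A minimal sketch with made-up data, not the hospital's actual charting code:

```python
import math

def u_chart_limits(error_counts, bottle_counts):
    """Center line and per-period 3-sigma limits for an errors-per-bottle
    u-chart. error_counts and bottle_counts are per-month totals; the
    limits vary with the number of bottles scanned each month."""
    u_bar = sum(error_counts) / sum(bottle_counts)
    limits = [(max(u_bar - 3 * math.sqrt(u_bar / n), 0.0),
               u_bar + 3 * math.sqrt(u_bar / n))
              for n in bottle_counts]
    return u_bar, limits

# Made-up monthly data: scanned errors and bottles scanned per month.
center, limits = u_chart_limits([97, 60, 25, 11], [1000, 1100, 1050, 1020])
print(center, limits[0])  # a point below the lower limit signals improvement
```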
Space station crew safety: Human factors interaction model
NASA Technical Reports Server (NTRS)
Cohen, M. M.; Junge, M. K.
1985-01-01
A model of the various human factors issues and interactions that might affect crew safety is developed. The first step was to systematically address the central question: How is this space station different from all other spacecraft? A wide range of possible issues was identified and researched. Five major topics of human factors issues that interact with crew safety resulted: Protocols, Critical Habitability, Work-Related Issues, Crew Incapacitation, and Personal Choice. Second, an interaction model was developed to show some degree of cause and effect between objective environmental or operational conditions and the creation of potential safety hazards. The intermediary steps between these two extremes of causality were the effects on human performance and the results of degraded performance. The model contains three milestones: stressor, (degraded) human performance, and safety hazard threshold. Between these milestones are two countermeasure intervention points. The first opportunity for intervention is the countermeasure against stress. If this countermeasure fails, performance degrades. The second opportunity for intervention is the countermeasure against error. If this second countermeasure fails, the threshold of a potential safety hazard may be crossed.
Wings: A New Paradigm in Human-Centered Design
NASA Technical Reports Server (NTRS)
Schutte, Paul C.
1997-01-01
Many aircraft accident/incident investigations cite crew error as a causal factor (Boeing Commercial Airplane Group 1996). Human factors experts suggest that crew error has many underlying causes and should be the start of an accident investigation and not the end. One of those causes, the flight deck design, is correctable. If a flight deck design does not accommodate the human's unique abilities and deficits, crew error may simply be the manifestation of this mismatch. Pilots repeatedly report that they are "behind the aircraft", i.e., they do not know what the automated aircraft is doing or how the aircraft is doing it until after the fact. Billings (1991) promotes the concept of "human-centered automation," calling on designers to allocate appropriate control and information to the human. However, there is much ambiguity regarding what it means to be human-centered. What often are labeled as "human-centered designs" are actually designs where a human factors expert has been involved in the design process or designs where tests have shown that humans can operate them. While such designs may be excellent, they do not represent designs that are systematically produced according to some set of prescribed methods and procedures. This paper describes a design concept, called Wings, that offers a clearer definition for human-centered design. This new design concept is radically different from current design processes in that the design begins with the human and uses the human body as a metaphor for designing the aircraft. This is not because the human is the most important part of the aircraft (certainly the aircraft would be useless without lift and thrust), but because the human is the least understood, the least programmable, and one of the more critical elements. The Wings design concept has three properties: a reversal in the design process, from aerodynamics-, structures-, and propulsion-centered to truly human-centered; a design metaphor that guides function allocation and control and display design; and a deliberate distinction between two fundamental functions of design, to complement and to interpret human performance. The complementary function extends the human's capabilities beyond his or her current limitations - this includes sensing, computation, memory, physical force, and human decision making styles and skills. The interpretive (or hermeneutic, Hollnagel 1991) function translates information, functionality, and commands between the human and the aircraft. The Wings design concept allows the human to remain aware of the aircraft through natural interpretation. It also affords great improvements in system performance by maximizing the human's natural abilities and complementing the human's skills in a natural way. This paper will discuss the Wings design concept by describing the reversal in the traditional design process, the function allocation strategy of Wings, and the functions of complementing and interpreting the human.
Errors in Aviation Decision Making: Bad Decisions or Bad Luck?
NASA Technical Reports Server (NTRS)
Orasanu, Judith; Martin, Lynne; Davison, Jeannie; Null, Cynthia H. (Technical Monitor)
1998-01-01
Despite efforts to design systems and procedures to support 'correct' and safe operations in aviation, errors in human judgment still occur and contribute to accidents. In this paper we examine how an NDM (naturalistic decision making) approach might help us to understand the role of decision processes in negative outcomes. Our strategy was to examine a collection of identified decision errors through the lens of an aviation decision process model and to search for common patterns. The second, and more difficult, task was to determine what might account for those patterns. The corpus we analyzed consisted of tactical decision errors identified by the NTSB (National Transportation Safety Board) from a set of accidents in which crew behavior contributed to the accident. A common pattern emerged: about three quarters of the errors represented plan-continuation errors, that is, a decision to continue with the original plan despite cues that suggested changing the course of action. Features in the context that might contribute to these errors were identified: (a) ambiguous dynamic conditions and (b) organizational and socially-induced goal conflicts. We hypothesize that 'errors' are mediated by underestimation of risk and failure to analyze the potential consequences of continuing with the initial plan. Stressors may further contribute to these effects. Suggestions for improving performance in these error-inducing contexts are discussed.
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
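The simple heuristics against which human interpolation was compared are easy to make concrete. A sketch of the local-mean estimator for a missing center pixel follows (illustrative only; the study's optimal observers additionally model the joint statistics of natural-image patches):

```python
import numpy as np

def local_mean_estimate(patch):
    """Estimate the missing center pixel of a square patch as the mean
    of the surrounding pixels, one of the simple heuristics."""
    patch = np.asarray(patch, dtype=float)
    r, c = patch.shape[0] // 2, patch.shape[1] // 2
    mask = np.ones(patch.shape, dtype=bool)
    mask[r, c] = False  # exclude the pixel being estimated
    return patch[mask].mean()

# Estimation error on a toy 3x3 patch with a known center value.
patch = np.array([[10, 12, 11], [9, 14, 13], [10, 11, 12]])
print(local_mean_estimate(patch) - patch[1, 1])  # the heuristic's error
```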
Bultena, Sybrine; Danielmeier, Claudia; Bekkering, Harold; Lemhöfer, Kristin
2017-01-01
Humans monitor their behavior to optimize performance, which presumably relies on stable representations of correct responses. During second language (L2) learning, however, stable representations have yet to be formed while knowledge of the first language (L1) can interfere with learning, which in some cases results in persistent errors. In order to examine how correct L2 representations are stabilized, this study examined performance monitoring in the learning process of second language learners for a feature that conflicts with their first language. Using EEG, we investigated if L2 learners in a feedback-guided word gender assignment task showed signs of error detection in the form of an error-related negativity (ERN) before and after receiving feedback, and how feedback is processed. The results indicated that initially, response-locked negativities for correct (CRN) and incorrect (ERN) responses were of similar size, showing a lack of internal error detection when L2 representations are unstable. As behavioral performance improved following feedback, the ERN became larger than the CRN, pointing to the first signs of successful error detection. Additionally, we observed a second negativity following the ERN/CRN components, the amplitude of which followed a similar pattern as the previous negativities. Feedback-locked data indicated robust FRN and P300 effects in response to negative feedback across different rounds, demonstrating that feedback remained important in order to update memory representations during learning. We thus show that initially, L2 representations may often not be stable enough to warrant successful error monitoring, but can be stabilized through repeated feedback, which means that the brain is able to overcome L1 interference, and can learn to detect errors internally after a short training session. The results contribute a different perspective to the discussion on changes in ERN and FRN components in relation to learning, by extending the investigation of these effects to the language learning domain. Furthermore, these findings provide a further characterization of the online learning process of L2 learners.
Human errors and measurement uncertainty
NASA Astrophysics Data System (ADS)
Kuselman, Ilya; Pennecchi, Francesca
2015-04-01
Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
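The Monte Carlo side of such an evaluation can be sketched as follows. All numbers are invented for illustration, standing in for the expert-elicited residual error probabilities and magnitudes used in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Made-up inputs standing in for expert-elicited values:
u_analytical = 0.02   # standard uncertainty without human error (pH units)
p_residual = 0.01     # residual probability of an undetected human error
bias_if_error = 0.15  # typical shift in the result when an error occurs

results = rng.normal(0.0, u_analytical, N)
error_occurs = rng.random(N) < p_residual
results += error_occurs * rng.normal(bias_if_error, 0.05, N)

print(f"u without human error:           {u_analytical:.4f}")
print(f"u including residual human error: {results.std(ddof=1):.4f}")
```

With inputs of this size, the human error term inflates the combined uncertainty noticeably but does not dominate it, mirroring the paper's conclusion.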
DOT National Transportation Integrated Search
2001-02-01
The Human Factors Analysis and Classification System (HFACS) is a general human error framework : originally developed and tested within the U.S. military as a tool for investigating and analyzing the human : causes of aviation accidents. Based upon ...
A Meta-Analysis Suggests Different Neural Correlates for Implicit and Explicit Learning.
Loonis, Roman F; Brincat, Scott L; Antzoulatos, Evan G; Miller, Earl K
2017-10-11
A meta-analysis of non-human primates performing three different tasks (Object-Match, Category-Match, and Category-Saccade associations) revealed signatures of explicit and implicit learning. Performance improved equally following correct and error trials in the Match (explicit) tasks, but it improved more after correct trials in the Saccade (implicit) task, a signature of explicit versus implicit learning. Likewise, error-related negativity, a marker for error processing, was greater in the Match (explicit) tasks. All tasks showed an increase in alpha/beta (10-30 Hz) synchrony after correct choices. However, only the implicit task showed an increase in theta (3-7 Hz) synchrony after correct choices that decreased with learning. In contrast, in the explicit tasks, alpha/beta synchrony increased with learning and decreased thereafter. Our results suggest that explicit versus implicit learning engages different neural mechanisms that rely on different patterns of oscillatory synchrony. Copyright © 2017 Elsevier Inc. All rights reserved.
Survey of Motion Tracking Methods Based on Inertial Sensors: A Focus on Upper Limb Human Motion
Filippeschi, Alessandro; Schmitz, Norbert; Miezal, Markus; Bleser, Gabriele; Ruffaldi, Emanuele; Stricker, Didier
2017-01-01
Motion tracking based on commercial inertial measurement units (IMUs) has been widely studied in recent years, as it is a cost-effective enabling technology for applications in which motion tracking based on optical technologies is unsuitable. This measurement method has a high impact on human performance assessment and human-robot interaction. IMU motion tracking systems are self-contained and wearable, allowing for long-lasting tracking of the user's motion in situated environments. After a survey of IMU-based human tracking, five techniques for motion reconstruction were selected and compared for reconstructing a human arm motion. IMU-based estimation was matched against the Vicon marker-based motion tracking system, considered as ground truth. Results show that all but one of the selected models perform similarly (about 35 mm average position estimation error). PMID:28587178
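Two pieces of such a comparison are easy to sketch: reconstructing an arm posture from IMU-estimated segment orientations via a kinematic chain, and scoring it against the marker-based reference. The chain model and segment lengths are illustrative assumptions, not one of the five surveyed techniques:

```python
import numpy as np

def wrist_position(R_upper, R_fore, l_upper=0.30, l_fore=0.25):
    """Wrist position from the two segment orientations (3x3 rotation
    matrices) estimated by upper-arm and forearm IMUs, via a minimal
    kinematic chain with the shoulder at the origin. Segment lengths
    (metres) are illustrative."""
    elbow = R_upper @ np.array([0.0, 0.0, -l_upper])
    return elbow + R_fore @ np.array([0.0, 0.0, -l_fore])

def mean_position_error(estimate, ground_truth):
    """Average Euclidean distance between an estimated trajectory and
    the marker-based reference, both (N, 3) and time-aligned; this is
    the kind of statistic behind the ~35 mm figure above."""
    return float(np.linalg.norm(estimate - ground_truth, axis=1).mean())
```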
The problem of automation: Inappropriate feedback and interaction, not overautomation
NASA Technical Reports Server (NTRS)
Norman, Donald A.
1989-01-01
As automation increasingly takes its place in industry, especially high-risk industry, it is often blamed for causing harm and increasing the chance of human error when failures occur. It is proposed that the problem is not the presence of automation, but rather its inappropriate design. The problem is that the operations are performed appropriately under normal conditions, but there is inadequate feedback and interaction with the humans who must control the overall conduct of the task. When the situations exceed the capabilities of the automatic equipment, then the inadequate feedback leads to difficulties for the human controllers. The problem is that the automation is at an intermediate level of intelligence, powerful enough to take over control that used to be done by people, but not powerful enough to handle all abnormalities. Moreover, its level of intelligence is insufficient to provide the continual, appropriate feedback that occurs naturally among human operators. To solve this problem, the automation should either be made less intelligent or more so, but the current level is quite inappropriate. The overall message is that it is possible to reduce error through appropriate design considerations.
Patient Safety in the Context of Neonatal Intensive Care: Research and Educational Opportunities
Raju, Tonse N. K.; Suresh, Gautham; Higgins, Rosemary D.
2012-01-01
Case reports and observational studies continue to report adverse events from medical errors. However, despite considerable attention to patient safety in the popular media, this topic is not a regular component of medical education, and much research needs to be carried out to understand the causes, consequences, and prevention of healthcare-related adverse events during neonatal intensive care. To address the knowledge gaps and to formulate a research and educational agenda in neonatology, the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD) invited a panel of experts to a workshop in August 2010. Patient safety issues discussed were: the reasons for errors, including systems design, working conditions, and worker fatigue; a need to develop a “culture” of patient safety; the role of electronic medical records, information technology, and simulators in reducing errors; error disclosure practices; medico-legal concerns; and educational needs. Specific neonatology-related topics discussed were: errors during resuscitation, mechanical ventilation, and performance of invasive procedures; medication errors including those associated with milk feedings; diagnostic errors; and misidentification of patients. This article provides an executive summary of the workshop. PMID:21386749
Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T
2018-03-05
Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and number of types of errors (9 and 13, respectively). ɸ6 was detected on 10% of scrubs and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves and the predicted rates were 7.3%, 19.4%, 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
Increased instrument intelligence--can it reduce laboratory error?
Jekelis, Albert W
2005-01-01
Recent literature has focused on the reduction of laboratory errors and the potential impact on patient management. This study assessed the intelligent, automated preanalytical process-control abilities in newer-generation analyzers as compared with older analyzers, and the impact on error reduction. Three generations of immunochemistry analyzers were challenged with pooled human serum samples for a 3-week period. One of the three analyzers had an intelligent process of fluidics checks, including bubble detection. Bubbles can cause erroneous results due to incomplete sample aspiration. This variable was chosen because it is the most easily controlled sample defect that can be introduced. Traditionally, lab technicians have had to visually inspect each sample for the presence of bubbles. This is time-consuming and introduces the possibility of human error. Instruments with bubble detection may be able to eliminate the human factor and reduce errors associated with the presence of bubbles. Specific samples were vortexed daily to introduce a visible quantity of bubbles, then immediately placed in the daily run. Errors were defined as a reported result greater than three standard deviations below the mean and associated with incomplete sample aspiration of the analyte of the individual analyzer. Three standard deviations represented the target limits of proficiency testing. The results of the assays were examined for accuracy and precision. Efficiency, measured as process throughput, was also measured to associate a cost factor and the potential impact of the error detection on the overall process. Analyzer performance stratified according to the level of internal process control. The older analyzers without bubble detection reported 23 erroneous results. The newest analyzer with bubble detection reported one specimen incorrectly. The precision and accuracy of the nonvortexed specimens were excellent and acceptable for all three analyzers. No errors were found in the nonvortexed specimens. There were no significant differences in overall process time for any of the analyzers when tests were arranged in an optimal configuration. The analyzer with advanced fluidic intelligence demonstrated the greatest ability to appropriately deal with an incomplete aspiration by not processing and reporting a result for the sample. This study suggests that preanalytical process-control capabilities could reduce errors. By association, it implies that similar intelligent process controls could favorably impact the error rate and, in the case of this instrument, do so without negatively impacting process throughput. Other improvements may be realized as a result of having an intelligent error-detection process, including further reduction in misreported results, fewer repeats, less operator intervention, and less reagent waste.
Performance Support Tools for Space Medical Operations
NASA Technical Reports Server (NTRS)
Byrne, Vicky; Schmid, Josef; Barshi, Immanuel
2010-01-01
Early Constellation space missions are expected to have medical capabilities similar to those currently on board the Space Shuttle and International Space Station (ISS). Flight surgeons on the ground in Mission Control will direct the Crew Medical Officer (CMO) during medical situations. If the crew is unable to communicate with the ground, the CMO will carry out medical procedures without the aid of a flight surgeon. In these situations, use of performance support tools can reduce errors and the time needed to perform emergency medical tasks. The research presented here is part of the Human Factors in Training Directed Research Project of the Space Human Factors Engineering Project under the Space Human Factors and Habitability Element of the Human Research Program. This is a joint project consisting of human factors teams from the Johnson Space Center (JSC) and the Ames Research Center (ARC). Work on medical training has been conducted in collaboration with the Medical Training Group at JSC and with Wyle, which provides medical training to crew members, biomedical engineers (BMEs), and flight surgeons under the Bioastronautics contract. Human factors personnel at Johnson Space Center have investigated medical performance support tools for CMOs and flight surgeons.
Algorithmic Classification of Five Characteristic Types of Paraphasias.
Fergadiotis, Gerasimos; Gorman, Kyle; Bedrick, Steven
2016-12-01
This study was intended to evaluate a series of algorithms developed to perform automatic classification of paraphasic errors (formal, semantic, mixed, neologistic, and unrelated errors). We analyzed 7,111 paraphasias from the Moss Aphasia Psycholinguistics Project Database (Mirman et al., 2010) and evaluated the classification accuracy of 3 automated tools. First, we used frequency norms from the SUBTLEXus database (Brysbaert & New, 2009) to differentiate nonword errors and real-word productions. Then we implemented a phonological-similarity algorithm to identify phonologically related real-word errors. Last, we assessed the performance of a semantic-similarity criterion that was based on word2vec (Mikolov, Yih, & Zweig, 2013). Overall, the algorithmic classification replicated human scoring for the major categories of paraphasias studied with high accuracy. The tool that was based on the SUBTLEXus frequency norms was more than 97% accurate in making lexicality judgments. The phonological-similarity criterion was approximately 91% accurate, and the overall classification accuracy of the semantic classifier ranged from 86% to 90%. Overall, the results highlight the potential of tools from the field of natural language processing for the development of highly reliable, cost-effective diagnostic tools suitable for collecting high-quality measurement data for research and clinical purposes.
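A compact Python sketch of the three-stage decision rule implied above; the thresholds, the phonological-similarity function, and the resources passed in (frequency norms, word vectors) are assumptions for illustration, not the authors' implementation:

import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def classify_paraphasia(error, target, freq_norms, vectors, phon_sim):
    # Stage 1: lexicality judgment -- absent from the frequency norms => nonword.
    if error not in freq_norms:
        return "neologistic"
    # Stages 2-3: phonological and semantic relatedness to the target word.
    phonological = phon_sim(error, target) >= 0.5              # illustrative cutoff
    semantic = cosine(vectors[error], vectors[target]) >= 0.4  # illustrative cutoff
    if phonological and semantic:
        return "mixed"
    if phonological:
        return "formal"
    if semantic:
        return "semantic"
    return "unrelated"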
The 'problem' with automation - Inappropriate feedback and interaction, not 'over-automation'
NASA Technical Reports Server (NTRS)
Norman, D. A.
1990-01-01
Automation in high-risk industry is often blamed for causing harm and increasing the chance of human error when failures occur. It is proposed that the problem is not the presence of automation, but rather its inappropriate design. The problem is that the operations are performed appropriately under normal conditions, but there is inadequate feedback and interaction with the humans who must control the overall conduct of the task. The problem is that the automation is at an intermediate level of intelligence, powerful enough to take over control which used to be done by people, but not powerful enough to handle all abnormalities. Moreover, its level of intelligence is insufficient to provide the continual, appropriate feedback that occurs naturally among human operators. To solve this problem, the automation should either be made less intelligent or more so, but the current level is quite inappropriate. The overall message is that it is possible to reduce error through appropriate design considerations.
Position calibration of a 3-DOF hand-controller with hybrid structure
NASA Astrophysics Data System (ADS)
Zhu, Chengcheng; Song, Aiguo
2017-09-01
A hand-controller is a human-robot interactive device, which measures the 3-DOF (Degree of Freedom) position of the human hand and sends it as a command to control robot movement. The device also receives 3-DOF force feedback from the robot and applies it to the human hand. Thus, the precision of 3-DOF position measurement is a key performance factor for hand-controllers. However, when using a hybrid-type 3-DOF hand-controller, various errors occur, which are considered to originate from machining and assembly variations within the device. This paper presents a calibration method that improves the position tracking accuracy of hybrid-type hand-controllers by determining the actual size of the hand-controller parts. By re-measuring and re-calibrating this kind of hand-controller, the actual size of the key parts that cause errors is determined. Modifying the formula parameters with the actual sizes obtained in the calibration process improves the end-position tracking accuracy of the device.
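One plausible reading of the "modify the formula parameters" step, sketched in Python; the kinematic formula below is a placeholder standing in for the device's real forward-position equations, and all names are hypothetical:

import numpy as np
from scipy.optimize import least_squares

def forward_position(angles, link_sizes):
    # Placeholder kinematics for the hand-controller's position formula;
    # a, b, c are joint readings, l1..l3 the part sizes being calibrated.
    l1, l2, l3 = link_sizes
    a, b, c = angles
    return np.array([l1 * np.cos(a) + l2 * np.cos(a + b),
                     l1 * np.sin(a) + l2 * np.sin(a + b),
                     l3 * c])

def residuals(link_sizes, angle_sets, measured_positions):
    # Stack end-position errors over all re-measured reference poses.
    return np.concatenate([forward_position(q, link_sizes) - p
                           for q, p in zip(angle_sets, measured_positions)])

# Starting from the nominal part sizes, fit the actual sizes:
# fit = least_squares(residuals, x0=[0.10, 0.12, 0.05],
#                     args=(angle_sets, measured_positions))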
A method for automatic feature points extraction of human vertebrae three-dimensional model
NASA Astrophysics Data System (ADS)
Wu, Zhen; Wu, Junsheng
2017-05-01
A method for automatic extraction of the feature points of the human vertebrae three-dimensional model is presented. Firstly, the statistical model of vertebrae feature points is established based on the results of manual vertebrae feature points extraction. Then anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to get the estimated position of the feature point. Finally, by analyzing the curvature in the spherical neighborhood with the estimated position of feature points, the final position of the feature points is obtained. According to the benchmark result on multiple test models, the mean relative errors of feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3% and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.
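The mean relative error reported above is presumably of the standard form below (in LaTeX); the normalizing length L is not stated in the abstract, so treating it as, e.g., the model's bounding-box diagonal is an assumption:

\mathrm{MRE} = \frac{1}{N}\sum_{i=1}^{N}\frac{\lVert \hat{p}_i - p_i \rVert}{L}\times 100\%

where p_i is the manually marked position of feature point i, \hat{p}_i the automatically extracted position, and N the number of feature points.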
Compound Stimulus Presentation Does Not Deepen Extinction in Human Causal Learning
Griffiths, Oren; Holmes, Nathan; Westbrook, R. Fred
2017-01-01
Models of associative learning have proposed that cue-outcome learning critically depends on the degree of prediction error encountered during training. Two experiments examined the role of error-driven extinction learning in a human causal learning task. Target cues underwent extinction in the presence of additional cues, which differed in the degree to which they predicted the outcome, thereby manipulating outcome expectancy and, in the absence of any change in reinforcement, prediction error. These prediction error manipulations have each been shown to modulate extinction learning in aversive conditioning studies. While both manipulations resulted in increased prediction error during training, neither enhanced extinction in the present human learning task (one manipulation resulted in less extinction at test). The results are discussed with reference to the types of associations that are regulated by prediction error, the types of error terms involved in their regulation, and how these interact with parameters involved in training. PMID:28232809
Using a Hybrid Model to Forecast the Prevalence of Schistosomiasis in Humans
Zhou, Lingling; Xia, Jing; Yu, Lijing; Wang, Ying; Shi, Yun; Cai, Shunxiang; Nie, Shaofa
2016-01-01
Background: We previously proposed a hybrid model combining both the autoregressive integrated moving average (ARIMA) and the nonlinear autoregressive neural network (NARNN) models in forecasting schistosomiasis. Our purpose in the current study was to forecast the annual prevalence of human schistosomiasis in Yangxin County, using our ARIMA-NARNN model, thereby further certifying the reliability of our hybrid model. Methods: We used the ARIMA, NARNN and ARIMA-NARNN models to fit and forecast the annual prevalence of schistosomiasis. The modeling period covered the annual prevalence from 1956 to 2008, while the testing period covered 2009 to 2012. The mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) were used to measure model performance. We reconstructed the hybrid model to forecast the annual prevalence from 2013 to 2016. Results: The modeling and testing errors generated by the ARIMA-NARNN model were lower than those obtained from either the single ARIMA or NARNN models. The predicted annual prevalence from 2013 to 2016 demonstrated an initial decreasing trend, followed by an increase. Conclusions: The ARIMA-NARNN model can be well applied to analyze surveillance data for early warning systems for the control and elimination of schistosomiasis. PMID:27023573
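For reference, the three error measures named in the Methods are standard (in LaTeX, with y_t the observed and \hat{y}_t the forecast prevalence over n test years):

\mathrm{MSE} = \frac{1}{n}\sum_{t=1}^{n}\left(y_t-\hat{y}_t\right)^2,\qquad
\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\left|y_t-\hat{y}_t\right|,\qquad
\mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left|\frac{y_t-\hat{y}_t}{y_t}\right|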
A neural network for noise correlation classification
NASA Astrophysics Data System (ADS)
Paitz, Patrick; Gokhberg, Alexey; Fichtner, Andreas
2018-02-01
We present an artificial neural network (ANN) for the classification of ambient seismic noise correlations into two categories, suitable and unsuitable for noise tomography. By using only a small manually classified data subset for network training, the ANN allows us to classify large data volumes with low human effort and to encode the valuable subjective experience of data analysts that cannot be captured by a deterministic algorithm. Based on a new feature extraction procedure that exploits the wavelet-like nature of seismic time-series, we efficiently reduce the dimensionality of noise correlation data, still keeping relevant features needed for automated classification. Using global- and regional-scale data sets, we show that classification errors of 20 per cent or less can be achieved when the network training is performed with as little as 3.5 per cent and 16 per cent of the data sets, respectively. Furthermore, the ANN trained on the regional data can be applied to the global data, and vice versa, without a significant increase of the classification error. An experiment in which four students manually classified the data revealed that the classification error they would assign to each other (>35 per cent) is substantially larger than the classification error of the ANN. This indicates that reproducibility would be hampered more by human subjectivity than by imperfections of the ANN.
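A minimal Python sketch of the workflow (small labelled subset, wavelet features, ANN, bulk classification); the wavelet choice, feature definition, and network size are assumptions, not the authors' configuration:

import numpy as np
import pywt                                   # wavelet transforms
from sklearn.neural_network import MLPClassifier

def wavelet_features(trace, wavelet="db4", level=4):
    # Reduce a noise-correlation trace to a handful of per-scale energies,
    # in the spirit of the paper's wavelet-like feature extraction.
    coeffs = pywt.wavedec(np.asarray(trace, float), wavelet, level=level)
    return np.array([np.log1p(np.sum(c ** 2)) for c in coeffs])

# traces: all correlation time-series; labels: analyst labels (1 = suitable),
# available only for a small subset of size n_lab.
# feats = np.array([wavelet_features(t) for t in traces])
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
# clf.fit(feats[:n_lab], labels[:n_lab])
# predicted = clf.predict(feats)              # classify the full volume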
Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure Validation Simulation Study
NASA Technical Reports Server (NTRS)
Murdoch, Jennifer L.; Bussink, Frank J. L.; Chamberlain, James P.; Chartrand, Ryan C.; Palmer, Michael T.; Palmer, Susan O.
2008-01-01
The Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure (ITP) Validation Simulation Study investigated the viability of an ITP designed to enable oceanic flight level changes that would not otherwise be possible. Twelve commercial airline pilots with current oceanic experience flew a series of simulated scenarios involving either standard or ITP flight level change maneuvers and provided subjective workload ratings, assessments of ITP validity and acceptability, and objective performance measures associated with the appropriate selection, request, and execution of ITP flight level change maneuvers. In the majority of scenarios, subject pilots correctly assessed the traffic situation, selected an appropriate response (i.e., either a standard flight level change request, an ITP request, or no request), and executed their selected flight level change procedure, if any, without error. Workload ratings for ITP maneuvers were acceptable and not substantially higher than for standard flight level change maneuvers, and, for the majority of scenarios and subject pilots, subjective acceptability ratings and comments for ITP were generally high and positive. Qualitatively, the ITP was found to be valid and acceptable. However, the error rates for ITP maneuvers were higher than for standard flight level changes, and these errors may have design implications for both the ITP and the study's prototype traffic display. These errors and their implications are discussed.
Mortensen, Jonathan M; Telis, Natalie; Hughey, Jacob J; Fan-Minogue, Hua; Van Auken, Kimberly; Dumontier, Michel; Musen, Mark A
2016-04-01
Biomedical ontologies contain errors. Crowdsourcing, defined as taking a job traditionally performed by a designated agent and outsourcing it to an undefined large group of people, provides scalable access to humans. Therefore, the crowd has the potential to overcome the limited accuracy and scalability found in current ontology quality assurance approaches. Crowd-based methods have identified errors in SNOMED CT, a large, clinical ontology, with an accuracy similar to that of experts, suggesting that crowdsourcing is indeed a feasible approach for identifying ontology errors. This work uses that same crowd-based methodology, as well as a panel of experts, to verify a subset of the Gene Ontology (200 relationships). Experts identified 16 errors, generally in relationships referencing acids and metals. The crowd performed poorly in identifying those errors, with an area under the receiver operating characteristic curve ranging from 0.44 to 0.73, depending on the method's configuration. However, when the crowd verified what experts considered to be easy relationships with useful definitions, they performed reasonably well. Notably, there are significantly fewer Google search results for Gene Ontology concepts than SNOMED CT concepts. This disparity may account for the difference in performance: fewer search results indicate a more difficult task for the worker. The number of Internet search results could serve as a method to assess which tasks are appropriate for the crowd. These results suggest that the crowd fits better as an expert assistant, helping experts with their verification by completing the easy tasks and allowing experts to focus on the difficult tasks, rather than as an expert replacement. Copyright © 2016 Elsevier Inc. All rights reserved.
Kim, Jun Sik; Jeong, Byung Yong
2018-05-03
The study aimed to describe the characteristics of occupational injuries among female workers in residential healthcare facilities for the elderly, and to analyze human errors as causes of accidents. From the national industrial accident compensation data, 506 injuries to female workers were analyzed by age and occupation. The results showed that medical service workers were the most prevalent occupation among the injured (54.1%), followed by social welfare workers (20.4%). Among the injured, 55.7% had <1 year of work experience, and 37.9% were ≥60 years old. Slips/falls were the most common type of accident (42.7%), and the proportion injured by slips/falls increases with age. Among human errors, action errors were the primary cause, followed by perception errors and cognition errors. Moreover, the ratios of injuries caused by perception errors and by action errors each increase with age. The findings of this study suggest a need to design workplaces that accommodate the characteristics of older female workers.
Context-dependent sequential effects of target selection for action.
Moher, Jeff; Song, Joo-Hyun
2013-07-11
Humans exhibit variation in behavior from moment to moment even when performing a simple, repetitive task. Errors are typically followed by cautious responses, minimizing subsequent distractor interference. However, less is known about how variation in the execution of an ultimately correct response affects subsequent behavior. We asked participants to reach toward a uniquely colored target presented among distractors and created two categories to describe participants' responses in correct trials based on analyses of movement trajectories; partial errors referred to trials in which observers initially selected a nontarget for action before redirecting the movement and accurately pointing to the target, and direct movements referred to trials in which the target was directly selected for action. We found that latency to initiate a hand movement was shorter in trials following partial errors compared to trials following direct movements. Furthermore, when the target and distractor colors were repeated, movement time and reach movement curvature toward distractors were greater following partial errors compared to direct movements. Finally, when the colors were repeated, partial errors were more frequent than direct movements following partial-error trials, and direct movements were more frequent following direct-movement trials. The dependence of these latter effects on repeated-task context indicates the involvement of higher-level cognitive mechanisms in an integrated attention-action system in which execution of a partial-error or direct-movement response affects memory representations that bias performance in subsequent trials. Altogether, these results demonstrate that whether a nontarget is selected for action or not has a measurable impact on subsequent behavior.
Some practical problems in implementing randomization.
Downs, Matt; Tucker, Kathryn; Christ-Schmidt, Heidi; Wittes, Janet
2010-06-01
While often theoretically simple, implementing randomization to treatment in a masked, but confirmable, fashion can prove difficult in practice. At least three categories of problems occur in randomization: (1) bad judgment in the choice of method, (2) design and programming errors in implementing the method, and (3) human error during the conduct of the trial. This article focuses on these latter two types of errors, dealing operationally with what can go wrong after trial designers have selected the allocation method. We offer several case studies and corresponding recommendations for lessening the frequency of problems in allocating treatment or for mitigating the consequences of errors. Recommendations include: (1) reviewing the randomization schedule before starting a trial, (2) being especially cautious of systems that use on-demand random number generators, (3) drafting unambiguous randomization specifications, (4) performing thorough testing before entering a randomization system into production, (5) maintaining a dataset that captures the values investigators used to randomize participants, thereby allowing the process of treatment allocation to be reproduced and verified, (6) resisting the urge to correct errors that occur in individual treatment assignments, (7) preventing inadvertent unmasking to treatment assignments in kit allocations, and (8) checking a sample of study drug kits to allow detection of errors in drug packaging and labeling. Although we performed a literature search of documented randomization errors, the examples that we provide and the resultant recommendations are based largely on our own experience in industry-sponsored clinical trials. We do not know how representative our experience is or how common errors of the type we have seen occur. Our experience underscores the importance of verifying the integrity of the treatment allocation process before and during a trial. Clinical Trials 2010; 7: 235-245. http://ctj.sagepub.com.
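A small Python sketch of recommendations (1) and (5) above: a schedule that is fully determined by a stored seed can be reviewed before the trial starts and re-derived afterwards to verify the assignments actually dispensed. The block size and treatment labels here are illustrative:

import random

def make_schedule(seed, n_participants, block=("A", "A", "B", "B")):
    # Permuted-block randomization, reproducible from the seed alone.
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n_participants:
        blk = list(block)
        rng.shuffle(blk)
        schedule.extend(blk)
    return schedule[:n_participants]

def verify(seed, actual_assignments):
    # Re-derive the schedule and confirm the dispensed assignments match.
    return make_schedule(seed, len(actual_assignments)) == actual_assignments

print(verify(2010, make_schedule(2010, 10)))  # -> True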
Memory Errors Reveal a Bias to Spontaneously Generalize to Categories
ERIC Educational Resources Information Center
Sutherland, Shelbie L.; Cimpian, Andrei; Leslie, Sarah-Jane; Gelman, Susan A.
2015-01-01
Much evidence suggests that, from a young age, humans are able to generalize information learned about a subset of a category to the category itself. Here, we propose that--beyond simply being able to perform such generalizations--people are "biased" to generalize to categories, such that they routinely make spontaneous, implicit…
ERIC Educational Resources Information Center
Swanson, Kristen; Allen, Gayle; Mancabelli, Rob
2015-01-01
Even mentioning data analysis puts many educators on edge; they fear that in data discussions, their performance will be judged. And, the authors note, it's a human trait to look for the source of a problem in the behavior of people involved rather than the system surrounding those people--what some call the Fundamental Attribution Error. When…
Marsden, Karen E; Ma, Wei Ji; Deci, Edward L; Ryan, Richard M; Chiu, Pearl H
2015-06-01
The duration and quality of human performance depend on both intrinsic motivation and external incentives. However, little is known about the neuroscientific basis of this interplay between internal and external motivators. Here, we used functional magnetic resonance imaging to examine the neural substrates of intrinsic motivation, operationalized as the free-choice time spent on a task when this was not required, and tested the neural and behavioral effects of external reward on intrinsic motivation. We found that increased duration of free-choice time was predicted by generally diminished neural responses in regions associated with cognitive and affective regulation. By comparison, the possibility of additional reward improved task accuracy, and specifically increased neural and behavioral responses following errors. Those individuals with the smallest neural responses associated with intrinsic motivation exhibited the greatest error-related neural enhancement under the external contingency of possible reward. Together, these data suggest that human performance is guided by a "tonic" and "phasic" relationship between the neural substrates of intrinsic motivation (tonic) and the impact of external incentives (phasic).
[Risk and risk management in aviation].
Müller, Manfred
2004-10-01
RISK MANAGEMENT: The large proportion of human errors in aviation accidents suggested an at first sight brilliant solution: replace the fallible human being with an "infallible" digitally operating computer. However, even after the introduction of the so-called HITEC airplanes, human error still accounts for 75% of all accidents. Thus, if the computer is ruled out as the ultimate safety system, how else can complex operations involving quick and difficult decisions be controlled? OPTIMIZED TEAM INTERACTION/PARALLEL CONNECTION OF THOUGHT MACHINES: Since a single person is always "highly error-prone", support and control have to be guaranteed by a second person. The independent work of two minds results in a safety network that cushions human errors more effectively. NON-PUNITIVE ERROR MANAGEMENT: To be able to tackle the actual problems, the open discussion of errors that have occurred must not be endangered by the threat of punishment. It has been shown in the past that progress is primarily achieved by investigating and following up mistakes, failures and catastrophes shortly after they happened. HUMAN FACTOR RESEARCH PROJECT: A comprehensive survey showed the following result: by far the most frequent safety-critical situation (37.8% of all events) consists of the following combination of risk factors: 1. A complication develops. 2. In this situation of increased stress a human error occurs. 3. The negative effects of the error cannot be corrected or eased because there are deficiencies in team interaction on the flight deck. This means, for example, that a negative social climate acts as a "turbocharger" when a human error occurs. It needs to be pointed out that a negative social climate is not identical with a dispute. In many cases the working climate is burdened without the responsible person even noticing it: a first negative impression, too much or too little respect, contempt, misunderstandings, not expressing unclear concern, etc. can considerably reduce the efficiency of a team.
Assessing the Performance of Human-Automation Collaborative Planning Systems
2011-06-01
processing and incorporating vast amounts of incoming information into their solutions. However, these algorithms are brittle and unable to account for...planning system, a descriptive Mission Performance measure may address the total travel time on the path or the cost of the path (e.g. total work...minimizing costs or collisions [4, 32, 33]. Error measures for such a path planning system may track how many collisions occur or how much threat
NASA Technical Reports Server (NTRS)
Bienert, Nancy; Mercer, Joey; Homola, Jeffrey; Morey, Susan; Prevot, Thomas
2014-01-01
This paper presents a case study of how factors such as wind prediction errors and metering delays can influence controller performance and workload in Human-In-The-Loop simulations. Retired air traffic controllers worked two arrival sectors adjacent to the terminal area. The main tasks were to provide safe air traffic operations and deliver the aircraft to the metering fix within +/- 25 seconds of the scheduled arrival time with the help of provided decision support tools. Analyses explore the potential impact of metering delays and system uncertainties on controller workload and performance. The results suggest that trajectory prediction uncertainties impact safety performance, while metering fix accuracy and workload appear subject to the scenario difficulty.
FRAMEWORK AND APPLICATION FOR MODELING CONTROL ROOM CREW PERFORMANCE AT NUCLEAR POWER PLANTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ronald L Boring; David I Gertman; Tuan Q Tran
2008-09-01
This paper summarizes an emerging project regarding the utilization of high-fidelity MIDAS simulations for visualizing and modeling control room crew performance at nuclear power plants. The key envisioned uses for MIDAS-based control room simulations are: (i) the estimation of human error associated with advanced control room equipment and configurations, (ii) the investigative determination of contributory cognitive factors for risk significant scenarios involving control room operating crews, and (iii) the certification of reduced staffing levels in advanced control rooms. It is proposed that MIDAS serves as a key component for the effective modeling of cognition, elements of situation awareness, and risk associated with human performance in next generation control rooms.
Rong, Hao; Tian, Jin; Zhao, Tingdi
2016-01-01
In traditional approaches to human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some conditions (especially organizational or managerial ones) can hardly be included, so the analysis is incomplete and fails to reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitatively estimating the human error probability (HEP) and its related variables as they change over a long period. Taking the Minuteman III missile accident in 2008 as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; the critical factors are also determined in terms of the impact that the variables have on risks in different time periods. It is indicated that both technical and organizational aspects should be addressed to minimize human errors in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
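An illustrative system-dynamics sketch in Python showing the general mechanism (this is a toy loop, not the paper's calibrated model): organizational variables evolve as stocks and flows, and a time-varying HEP is read off them:

def simulate_hep(years=50, dt=0.5):
    # Toy stock-and-flow loop: organizational pressure erodes training
    # quality over time, and degraded training drives the HEP upward.
    training, history = 1.0, []
    for k in range(int(years / dt)):
        t = k * dt
        pressure = 0.02 if t < 25 else 0.04   # e.g. an aging program
        recovery = 0.01 * (1.0 - training)    # corrective feedback
        training += dt * (recovery - pressure * training)
        hep = 1e-3 / max(training, 0.05)      # lower quality -> higher HEP
        history.append((t, hep))
    return history

for t, hep in simulate_hep()[::20]:
    print(f"year {t:4.1f}: HEP = {hep:.2e}")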
Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred
2015-01-01
Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking) when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. Builders should consider adding modules for the recognition and classification of head movements to the robot's input channels. Evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.
Design Guidance for Computer-Based Procedures for Field Workers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna; Le Blanc, Katya; Bly, Aaron
Nearly all activities that involve human interaction with nuclear power plant systems are guided by procedures, instructions, or checklists. Paper-based procedures (PBPs) currently used by most utilities have a demonstrated history of ensuring safety; however, improving procedure use could yield significant savings in increased efficiency, as well as improved safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease human error rates, especially human error rates associated with procedure use. As a step toward the goal of improving field workers' procedure use and adherence, and hence improving human performance and overall system reliability, the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing, depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner that is intended to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information relevant for the task and situation at hand, which has potential consequences of taking up valuable time when operators must be responding to the situation, and potentially leading operators down an incorrect response path. Other challenges related to use of PBPs are management of multiple procedures, place-keeping, finding the correct procedure for a task, and relying on other sources of additional information to ensure a functional and accurate understanding of the current plant status (Converse, 1995; Fink, Killian, Hanes, and Naser, 2009; Le Blanc, Oxstrand, and Waicosky, 2012). This report provides design guidance to be used when designing the human-system interaction and the graphical user interface for a CBP system. The guidance is based on human factors research related to the design and usability of CBPs conducted by Idaho National Laboratory, 2012 - 2016.
Investigation of automated task learning, decomposition and scheduling
NASA Technical Reports Server (NTRS)
Livingston, David L.; Serpen, Gursel; Masti, Chandrashekar L.
1990-01-01
The details and results of research conducted in the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without the use of good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition. This was the primary reasoning behind attempting the study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks since many formal techniques have been developed for their analysis and synthesis. It is the approach to integrate the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
On operator strategic behavior
NASA Technical Reports Server (NTRS)
Hancock, P. A.
1991-01-01
Deeper and more detailed knowledge as to how human operators such as pilots respond, singly and in groups, to demands on their performance which arise from technical systems will support the manipulation of such systems' design in order to accommodate the foibles of human behavior. Efforts to understand how self-autonomy impacts strategic behavior and such related issues as error generation/recognition/correction are still in their infancy. The present treatment offers both general and aviation-specific definitions of strategic behavior as precursors of prospective investigations.
[Innovative training for enhancing patient safety. Safety culture and integrated concepts].
Rall, M; Schaedle, B; Zieger, J; Naef, W; Weinlich, M
2002-11-01
Patient safety is determined by the performance safety of the medical team. Errors in medicine are amongst the leading causes of death of hospitalized patients. These numbers call for action. Backgrounds, methods and new forms of training are introduced in this article. Concepts from safety research are transferred to the field of emergency medical treatment. Strategies from realistic patient simulator training sessions and innovative training concepts are discussed. The reasons for the high number of errors in medicine are not due to a lack of medical knowledge but to human factors and organisational circumstances. A first step towards improved patient safety is to accept this. We always need to be prepared that errors will occur. A next step would be to separate "error" from guilt (culture of blame), allowing for a real analysis of accidents and the establishment of meaningful incident reporting systems. Concepts with a good success record from aviation, like "crew resource management" (CRM) training, have been adapted by medicine and are ready to use. These concepts require theoretical education as well as practical training. Innovative team training sessions using realistic patient simulator systems with video taping (for self-reflection) and interactive debriefing following the sessions are very promising. As the need to reduce error rates in medicine is very high and the reasons, methods and training concepts are known, we are urged to implement these new training concepts widely and consistently. To err is human; not to counteract error is not.
High definition video teaching module for learning neck dissection.
Mendez, Adrian; Seikaly, Hadi; Ansari, Kal; Murphy, Russell; Cote, David
2014-03-25
Video teaching modules are proven effective tools for enhancing student competencies and technical skills in the operating room. Integration into post-graduate surgical curricula, however, continues to pose a challenge in modern surgical education. To date, video teaching modules for neck dissection have yet to be described in the literature. To develop and validate an HD video-based teaching module (HDVM) to help instruct post-graduate otolaryngology trainees in performing neck dissection. This prospective study included 6 intermediate to senior otolaryngology residents. All consented subjects first performed a control selective neck dissection. Subjects were then exposed to the video teaching module. Following a washout period, a repeat procedure was performed. Recordings of both sets of neck dissections were de-identified, reviewed by an independent evaluator, and scored using the Observational Clinical Human Reliability Assessment (OCHRA) system. In total, 91 surgical errors were made prior to the HDVM and 41 after exposure, representing a 55% decrease in error occurrence. The two groups were found to be significantly different. Similarly, 66 and 24 staff takeover events occurred pre- and post-HDVM exposure, respectively, representing a statistically significant 64% decrease. HDVM is a useful adjunct to classical surgical training. Residents performed significantly fewer errors following exposure to the HD video module. Similarly, significantly fewer staff takeover events occurred following exposure to the HDVM.
Wiegmann, D A; Shappell, S A
2001-11-01
The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.
To Err Is Human; To Structurally Prime from Errors Is Also Human
ERIC Educational Resources Information Center
Slevc, L. Robert; Ferreira, Victor S.
2013-01-01
Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…
NASA Astrophysics Data System (ADS)
Psikuta, Agnes; Mert, Emel; Annaheim, Simon; Rossi, René M.
2018-02-01
To evaluate the quality of new energy-saving and performance-supporting building and urban settings, thermal sensation and comfort models are often used. The accuracy of these models is related to accurate prediction of the human thermo-physiological response that, in turn, is highly sensitive to the local effect of clothing. This study aimed at the development of an empirical regression model of the air gap thickness and the contact area in clothing to accurately simulate human thermal and perceptual response. The statistical model reliably predicted both parameters for 14 body regions based on the clothing ease allowances. The effect of the standard error in air gap prediction on the thermo-physiological response was lower than the differences between healthy humans. It was demonstrated that currently used assumptions and methods for determination of the air gap thickness can produce a substantial error for all global, mean, and local physiological parameters, and hence lead to false estimation of the resultant physiological state of the human body, thermal sensation, and comfort. Thus, this model may help researchers to strive for improvement of human thermal comfort, health, productivity, safety, and overall sense of well-being with simultaneous reduction of energy consumption and costs in the built environment.
Human factors analysis and classification system-HFACS.
DOT National Transportation Integrated Search
2000-02-01
Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident : reporting systems are not designed around any theoretical framework of human error. As a result, most : accident databases are not conduci...
Robustness of linkage strategy that leads to large-scale cooperation.
Inaba, Misato; Takahashi, Nobuyuki; Ohtsuki, Hisashi
2016-11-21
One of the most well-known models to characterize cooperation among unrelated individuals is the social dilemma (SD). However, there is no consensus about how to solve the SD by itself. Since SDs are often embedded in other social interactions, including indirect reciprocity games (IR), humans can coordinate their behaviors across multiple games. Such coordination is called 'linkage'. Recently, linkage has been considered a promising solution to resolve SDs, since excluding SD defectors (i.e. those who defected in the SD) from indirectly reciprocal relationships functions as a costless sanction. A previous study performed mathematical modeling and revealed that a linkage strategy, which cooperates in the SD and engages in the Standing strategy in IR based on the recipients' behaviors in both the SD and IR, was an ESS against a non-linkage strategy which defects in the SD and engages in the Standing strategy in IR based on recipients' behaviors only in IR (Panchanathan and Boyd, 2004). In order to investigate the robustness of the linkage strategy, we devised a non-linkage strategy which cooperates in the SD but does not link the two games. First, we conducted a mathematical analysis and demonstrated that the linkage strategy was not an ESS against the cooperating non-linkage strategy. Second, we conducted a series of agent-based computer simulations to examine how the strategies perform in situations in which various types of errors can occur. Our results showed that the linkage strategy was an ESS only when there were implementation errors in the SD. However, the equilibrium of the linkage strategy was unstable when there were perception errors. Since we know that humans are not free from perception errors in their social life, future studies will need to show how perception errors can be overcome in order to support the conclusion that linkage is a plausible solution to SDs. Copyright © 2016 Elsevier Ltd. All rights reserved.
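A toy Python round illustrating how the two error types enter such agent-based simulations; this is a deliberately simplified stand-in, not the authors' full Standing-strategy model:

import random

def sd_round(pop, e_impl=0.0, e_perc=0.0):
    # Each agent intends to cooperate in the SD unless it is a defector;
    # implementation errors flip the act itself, perception errors flip
    # what observers later believe about the act.
    for agent in pop:
        intends = agent["strategy"] != "defector"
        acted = intends ^ (random.random() < e_impl)
        agent["acted_sd"] = acted
        agent["seen_sd"] = acted ^ (random.random() < e_perc)

def ir_donation(donor, recipient, cost=1.0, benefit=3.0):
    # Linkage strategists withhold IR help from perceived SD defectors
    # (the 'costless sanction'); others donate unconditionally here.
    if donor["strategy"] == "linkage" and not recipient["seen_sd"]:
        return
    donor["payoff"] -= cost
    recipient["payoff"] += benefit

pop = [{"strategy": s, "payoff": 0.0}
       for s in ["linkage", "linkage", "coop_nonlinkage", "defector"]]
sd_round(pop, e_impl=0.05, e_perc=0.05)
ir_donation(pop[0], pop[3])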
Culture Representation in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Gertman; Julie Marble; Steven Novack
Understanding human-system response is critical to being able to plan and predict mission success in the modern battlespace. Commonly, human reliability analysis has been used to predict failures of human performance in complex, critical systems. However, most human reliability methods fail to take culture into account. This paper takes an easily understood state-of-the-art human reliability analysis method and extends that method to account for the influence of culture, including acceptance of new technology, upon performance. The cultural parameters used to modify the human reliability analysis were determined from two standard industry approaches to cultural assessment: Hofstede's (1991) cultural factors and Davis' (1989) technology acceptance model (TAM). The result is called the Culture Adjustment Method (CAM). An example is presented that (1) reviews human reliability assessment with and without cultural attributes for a Supervisory Control and Data Acquisition (SCADA) system attack, (2) demonstrates how country specific information can be used to increase the realism of HRA modeling, and (3) discusses the differences in human error probability estimates arising from cultural differences.
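A heavily simplified Python sketch of the adjustment idea: a nominal HEP is scaled by culture-derived performance shaping factors. The dimension weights below are illustrative assumptions, not the CAM's actual values:

def culture_adjusted_hep(nominal_hep, hofstede, tech_acceptance):
    # hofstede: selected dimensions normalized to [0, 1],
    # e.g. {"power_distance": 0.8, "uncertainty_avoidance": 0.6}.
    # tech_acceptance: TAM-style acceptance score in [0, 1].
    psf = 1.0
    psf *= 1.0 + 0.5 * hofstede.get("power_distance", 0.5)        # deference inhibits challenging errors
    psf *= 1.0 + 0.3 * hofstede.get("uncertainty_avoidance", 0.5)
    psf *= 2.0 - tech_acceptance     # low acceptance of new technology raises HEP
    return min(nominal_hep * psf, 1.0)

print(culture_adjusted_hep(1e-2, {"power_distance": 0.8,
                                  "uncertainty_avoidance": 0.6}, 0.4))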
A classification on human factor accident/incident of China civil aviation in recent twelve years.
Luo, Xiao-li
2004-10-01
To study human factor accidents/incidents that occurred during 1990-2001 using a new classification standard. The human factor accident/incident classification standard was developed on the basis of Reason's model, combined with CAAC's traditional classifying method, and applied to the classified statistical analysis of 361 flying incidents and 35 flight accidents of China civil aviation that were induced by human factors and occurred from 1990 to 2001. 1) The incident percentage of taxi and cruise is higher than that of takeoff, climb and descent. 2) The dominating types of flight incident are runway deviation, overrunning, near-miss, tail/wingtip/engine strike and ground obstacle impact. 3) The top three accident types are loss of control caused by crew, mountain collision and runway overrun. 4) Crews' basic operating skill is lower than imagined, manifested mostly as poor corrective ability when a flight error happens. 5) Crew errors are represented by incorrect control, regulation and procedure violation, disorientation, and deviation from the correct flight level. Poor CRM skill is the dominant factor affecting China civil aviation safety; this result coincides with previous studies, but there is much difference and a distinct characteristic in the top incident phase, the types of crew error and behavior performance compared with those of advanced countries. We should strengthen CRM training for all pilots, aiming at Chinese pilots' behavior characteristics, in order to improve the safety level of China civil aviation.
Paediatric in-patient prescribing errors in Malaysia: a cross-sectional multicentre study.
Khoo, Teik Beng; Tan, Jing Wen; Ng, Hoong Phak; Choo, Chong Ming; Bt Abdul Shukor, Intan Nor Chahaya; Teh, Siao Hean
2017-06-01
Background: There is a lack of large comprehensive studies in developing countries on paediatric in-patient prescribing errors in different settings. Objectives: To determine the characteristics of in-patient prescribing errors among paediatric patients. Setting: General paediatric wards, neonatal intensive care units and paediatric intensive care units in government hospitals in Malaysia. Methods: This is a cross-sectional multicentre study involving 17 participating hospitals. Drug charts were reviewed in each ward to identify the prescribing errors. All prescribing errors identified were further assessed for their potential clinical consequences, likely causes and contributing factors. Main outcome measures: Incidence, types, potential clinical consequences, causes and contributing factors of the prescribing errors. Results: The overall prescribing error rate was 9.2% out of 17,889 prescribed medications. There was no significant difference in the prescribing error rates between different types of hospitals or wards. The use of electronic prescribing had a higher prescribing error rate than manual prescribing (16.9 vs 8.2%, p < 0.05). Twenty-eight (1.7%) prescribing errors were deemed to have serious potential clinical consequences and 2 (0.1%) were judged to be potentially fatal. Most of the errors were attributed to human factors, i.e. performance or knowledge deficit. The most common contributing factors were lack of supervision or lack of knowledge. Conclusions: Although electronic prescribing may potentially improve safety, it may conversely cause prescribing errors due to suboptimal interfaces and cumbersome work processes. Junior doctors need specific training in paediatric prescribing and close supervision to reduce prescribing errors in paediatric in-patients.
MacDonald, Chad; Moussavi, Zahra; Sarkodie-Gyan, Thompson
2007-01-01
This paper presents the development and simulation of a fuzzy logic based learning mechanism to emulate human motor learning. In particular, fuzzy inference was used to develop an internal model of a novel dynamic environment experienced during planar reaching movements with the upper limb. A dynamic model of the human arm was developed and a fuzzy if-then rule base was created to relate trajectory movement and velocity errors to internal model update parameters. An experimental simulation was performed to compare the fuzzy system's performance with that of human subjects. It was found that the dynamic model behaved as expected, and the fuzzy learning mechanism created an internal model that was capable of opposing the environmental force field to regain a trajectory closely resembling the desired ideal.
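A minimal Python sketch of one such if-then update rule with triangular memberships and centroid defuzzification; the membership ranges and gain values are illustrative assumptions, not the paper's rule base:

def tri(x, a, b, c):
    # Triangular membership function on [a, c], peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def update_gain(error):
    # IF |error| is SMALL THEN update gain is LOW (0.1);
    # IF |error| is LARGE THEN update gain is HIGH (0.8).
    e = abs(error)
    w_small = tri(e, -0.02, 0.0, 0.05)
    w_large = tri(e, 0.02, 0.10, 0.18)
    if w_small + w_large == 0.0:
        return 0.8 if e >= 0.18 else 0.1
    return (0.1 * w_small + 0.8 * w_large) / (w_small + w_large)

# Per reaching movement, the internal model of the force field is nudged
# in proportion to the observed trajectory error:
# model_force += update_gain(traj_error) * traj_error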
Temporal expectation in focal hand dystonia.
Avanzino, Laura; Martino, Davide; Martino, Isadora; Pelosin, Elisa; Vicario, Carmelo M; Bove, Marco; Defazio, Gianni; Abbruzzese, Giovanni
2013-02-01
Patients with writer's cramp present sensory and representational abnormalities relevant to motor control, such as impairment in the temporal discrimination between tactile stimuli and in pure motor imagery tasks, like the mental rotation of corporeal and inanimate objects. However, only limited information is available on the ability of patients with dystonia to process the time-dependent features (e.g. speed) of movement in real time. The processing of time-dependent features of movement has a crucial role in predicting whether the outcome of a complex motor sequence, such as handwriting or playing a musical passage, will be consistent with its ultimate goal, or will instead result in an execution error. In this study, we sought to evaluate the implicit ability to perceive the temporal outcome of different movements in a group of patients with writer's cramp. Fourteen patients affected by writer's cramp in the right hand and 17 age- and gender-matched healthy subjects were recruited for the study. Subjects were asked to perform a temporal expectation task by predicting the end of visually perceived human body motion (handwriting, i.e. the action performed by the human body segment specifically affected by writer's cramp) or inanimate object motion (a moving circle reaching a spatial target). Videos representing the movements were shown in full before the experimental trials; the actual tasks consisted of watching the same videos, but interrupted after a variable interval ('pre-dark') from their onset by a dark interval of variable duration. During the 'dark' interval, subjects were asked to indicate when the movement represented in the video reached its end by clicking on the space bar of the keyboard. We also included a visual working memory task. Performance on the timing task was analysed by measuring the absolute value of the timing error, the coefficient of variability and the percentage of anticipation responses. Patients with writer's cramp exhibited a greater absolute timing error compared with control subjects in the human body motion task (whereas no difference was observed in the inanimate object motion task). No effect of group was documented on the visual working memory tasks. Absolute timing error on the human body motion task did not significantly correlate with symptom severity, disease duration or writing speed. Our findings suggest an alteration of the writing movement representation at a central level and are consistent with the view that dystonia is not a purely motor disorder, but also involves non-motor (sensory, cognitive) aspects related to movement processing and planning.
Hicks, Christopher M; Bandiera, Glen W; Denny, Christopher J
2008-11-01
Emergency department (ED) resuscitation requires the coordinated efforts of an interdisciplinary team. Human errors are common and have a negative impact on patient safety. Although crisis resource management (CRM) skills are utilized in other clinical domains, most emergency medicine (EM) caregivers currently receive no formal CRM training. The objectives were to compile and compare attitudes toward CRM training among EM staff physicians, nurses, and residents at two Canadian academic teaching hospitals. Emergency physicians (EPs), residents, and nurses were asked to complete a Web survey that included Likert scales and short answer questions. Focus groups and pilot testing were used to inform survey development. Thematic content analysis was performed on the qualitative data set and compared to quantitative results. The response rate was 75.7% (N = 84). There was strong consensus regarding the importance of core CRM principles (i.e., effective communication, team leadership, resource utilization, problem-solving, situational awareness) in ED resuscitation. Problems with coordinating team actions (58.8%), communication (69.6%), and establishing priorities (41.3%) were among factors implicated in adverse events. Interdisciplinary collaboration (95.1%), efficiency of patient care (83.9%), and decreased medical error (82.6%) were proposed benefits of CRM training. Communication between disciplines is a barrier to effective ED resuscitation for 94.4% of nurses and 59.7% of EPs (p = 0.008). Residents reported a lack of exposure to (64.3%), yet had interest in (96.4%) formal CRM education using human patient simulation. Nurses rate communication as a barrier to teamwork more frequently than physicians. EM residents are keen to learn CRM skills. An opportunity exists to create a novel interdisciplinary CRM curriculum to improve EM team performance and mitigate human error.
Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio
2018-06-01
Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Hydrograph matching method for measuring model performance
NASA Astrophysics Data System (ADS)
Ewen, John
2011-09-01
Summary: Despite all the progress made over the years on developing automatic methods for analysing hydrographs and measuring the performance of rainfall-runoff models, automatic methods cannot yet match the power and flexibility of the human eye and brain. Very simple approaches are therefore being developed that mimic the way hydrologists inspect and interpret hydrographs, including the way that patterns are recognised, links are made by eye, and hydrological responses and errors are studied and remembered. In this paper, a dynamic programming algorithm originally designed for use in data mining is customised for use with hydrographs. It generates sets of "rays" that are analogous to the visual links made by the hydrologist's eye when linking features or times in one hydrograph to the corresponding features or times in another hydrograph. One outcome from this work is a new family of performance measures called "visual" performance measures. These can measure differences in amplitude and timing, including the timing errors between simulated and observed hydrographs in model calibration. To demonstrate this, two visual performance measures, one based on the Nash-Sutcliffe Efficiency and the other on the mean absolute error, are used in a total of 34 split-sample calibration-validation tests for two rainfall-runoff models applied to the Hodder catchment, northwest England. The customised algorithm, called the Hydrograph Matching Algorithm, is very simple to apply; it is given in a few lines of pseudocode.
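A small Python sketch of how such "visual" measures can be computed once the matching rays are available; the ray format (index pairs) is an assumption, and the matching algorithm itself is not reproduced here:

import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 is a perfect fit; below 0, the model
    # performs worse than simply predicting the mean observed flow.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def visual_errors(obs, sim, rays):
    # rays: (i, j) pairs linking observed time step i to its matched
    # simulated time step j, analogous to the eye's feature-to-feature links.
    timing = np.mean([abs(i - j) for i, j in rays])               # timing error
    amplitude = np.mean([abs(obs[i] - sim[j]) for i, j in rays])  # amplitude error
    return timing, amplitude

obs = [1.0, 2.0, 5.0, 3.0, 1.5]
sim = [1.0, 1.8, 3.2, 5.1, 2.9]      # same peak shape, one step late
print(nse(obs, sim), visual_errors(obs, sim, [(2, 3), (3, 4)]))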
Hickey, Edward J; Nosikova, Yaroslavna; Pham-Hung, Eric; Gritti, Michael; Schwartz, Steven; Caldarone, Christopher A; Redington, Andrew; Van Arsdell, Glen S
2015-02-01
We hypothesized that the National Aeronautics and Space Administration "threat and error" model (which is derived from analyzing >30,000 commercial flights, and explains >90% of crashes) is directly applicable to pediatric cardiac surgery. We implemented a unit-wide performance initiative, whereby every surgical admission constitutes a "flight" and is tracked in real time, with the aim of identifying errors. The first 500 consecutive patients (524 flights) were analyzed, with an emphasis on the relationship between error cycles and permanent harmful outcomes. Among 524 patient flights (risk adjustment for congenital heart surgery category: 1-6; median: 2), 68 (13%) involved residual hemodynamic lesions, 13 (2.5%) permanent end-organ injuries, and 7 deaths (1.3%). Preoperatively, 763 threats were identified in 379 (72%) flights. Only 51% of patient flights (267) were error free. In the remaining 257 flights, 430 errors occurred, most commonly related to proficiency (280, 65%) or judgment (69, 16%). In most flights with errors (173 of 257; 67%), an unintended clinical state resulted, i.e., the error was consequential. In 60% of consequential errors (n = 110; 21% of total), subsequent cycles of additional error/unintended states occurred. Cycles, particularly those containing multiple errors, were very significantly associated with permanent harmful end-states, including residual hemodynamic lesions (P < .0001), end-organ injury (P < .0001), and death (P < .0001). Deaths were almost always preceded by cycles (6 of 7; P < .0001). Human error, if not mitigated, often leads to cycles of error and unintended patient states, which are dangerous and precede the majority of harmful outcomes. Efforts to manage threats and error cycles (through crew resource management techniques) are likely to yield large increases in patient safety. Copyright © 2015. Published by Elsevier Inc.
Development of an errorable car-following driver model
NASA Astrophysics Data System (ADS)
Yang, H.-H.; Peng, H.
2010-06-01
An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (with-error) behaviours. This model was developed for the evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-squares identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar to, though somewhat higher than, that reported in traffic statistics.
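The abstract gives only the structure of the model; the sketch below illustrates that structure under stated assumptions: a simple proportional car-following law stands in for the identified stochastic model, and the three error-inducing mechanisms named above are added as a perceptual threshold on relative speed, random distraction, and a fixed reaction-time delay. All numeric parameters are illustrative.

    import random
    from collections import deque

    DT = 0.1                  # simulation step, s (assumed)
    DELAY = 10                # 1.0 s driver reaction delay, in steps (assumed)
    DESIRED_GAP = 30.0        # m (assumed)
    PERCEPTION_LIMIT = 0.5    # m/s: closing rates below this are not perceived

    def simulate(steps=600, p_distraction=0.01, seed=1):
        """Follower behind a lead car that brakes occasionally; returns gap history."""
        random.seed(seed)
        gap, v_lead, v_follow = 40.0, 25.0, 25.0
        percepts = deque([(gap, 0.0)] * DELAY, maxlen=DELAY)   # stale percepts
        distracted, gaps = 0, []
        for _ in range(steps):
            if random.random() < 0.005:             # lead car brakes sharply
                v_lead = max(5.0, v_lead - 5.0)
            if distracted == 0 and random.random() < p_distraction:
                distracted = 20                     # look away for 2 s
            g, rv = percepts[0]                     # act on DELAY-step-old percepts
            if abs(rv) < PERCEPTION_LIMIT:          # perceptual limitation
                rv = 0.0
            accel = 0.0 if distracted else 0.1 * (g - DESIRED_GAP) + 0.5 * rv
            distracted = max(0, distracted - 1)
            v_follow += max(-8.0, min(3.0, accel)) * DT    # acceleration limits
            gap += (v_lead - v_follow) * DT
            percepts.append((gap, v_lead - v_follow))
            gaps.append(gap)
            if gap <= 0.0:                          # rear-end collision occurred
                break
        return gaps

With the error mechanisms enabled, delayed and degraded reactions to the lead car's braking occasionally close the gap to zero, the collision behaviour the errorable model was built to reproduce; with them disabled, the nominal law tracks the lead car indefinitely.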
Suffering in Silence: Medical Error and its Impact on Health Care Providers.
Robertson, Jennifer J; Long, Brit
2018-04-01
All humans are fallible. Because physicians are human, unintentional errors unfortunately occur. While unintentional medical errors have an impact on patients and their families, they may also contribute to adverse mental and emotional effects on the involved provider(s). These may include burnout, lack of concentration, poor work performance, posttraumatic stress disorder, depression, and even suicidality. The objectives of this article are to 1) discuss the impact medical error has on involved provider(s), 2) provide potential reasons why medical error can have a negative impact on provider mental health, and 3) suggest solutions for providers and health care organizations to recognize and mitigate the adverse effects medical error has on providers. Physicians and other providers may feel a variety of adverse emotions after medical error, including guilt, shame, anxiety, fear, and depression. It is thought that the pervasive culture of perfectionism and individual blame in medicine plays a considerable role toward these negative effects. In addition, studies have found that despite physicians' desire for support after medical error, many physicians feel a lack of personal and administrative support. This may further contribute to poor emotional well-being. Potential solutions in the literature are proposed, including provider counseling, learning from mistakes without fear of punishment, discussing mistakes with others, focusing on the system versus the individual, and emphasizing provider wellness. Much of the reviewed literature is limited in terms of an emergency medicine focus or even regarding physicians in general. In addition, most studies are survey- or interview-based, which limits objectivity. While additional, more objective research is needed in terms of mitigating the effects of error on physicians, this review may help provide insight and support for those who feel alone in their attempt to heal after being involved in an adverse medical event. Unintentional medical error will likely always be a part of the medical system. However, by focusing on provider as well as patient health, we may be able to foster resilience in providers and improve care for patients in healthy, safe, and constructive environments. Copyright © 2017 Elsevier Inc. All rights reserved.
Optimized universal color palette design for error diffusion
NASA Astrophysics Data System (ADS)
Kolpatzik, Bernd W.; Bouman, Charles A.
1995-04-01
Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
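The SSQ palette-design step itself is not reproduced here; the sketch below shows the companion halftoning step, Floyd-Steinberg error diffusion onto a fixed palette, with an assumed toy palette. The paper additionally weights quantization errors in an opponent color space, which this plain Euclidean nearest-neighbor version omits.

    import numpy as np

    def error_diffuse(img, palette):
        """Floyd-Steinberg error diffusion of an RGB image onto a fixed palette.

        img: H x W x 3 float array in [0, 1]; palette: P x 3 float array.
        """
        out = img.astype(float).copy()
        h, w, _ = out.shape
        for y in range(h):
            for x in range(w):
                old = out[y, x]
                idx = np.argmin(((palette - old) ** 2).sum(axis=1))  # nearest color
                err = old - palette[idx]            # quantization error
                out[y, x] = palette[idx]
                if x + 1 < w:
                    out[y, x + 1] += err * 7 / 16   # diffuse error to neighbors
                if y + 1 < h:
                    if x > 0:
                        out[y + 1, x - 1] += err * 3 / 16
                    out[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        out[y + 1, x + 1] += err * 1 / 16
        return out

    # Toy 8-color palette (corners of the RGB cube); a real universal palette
    # would be allocated by SSQ in a visually uniform space as described above.
    palette = np.array([[r, g, b] for r in (0, 1) for g in (0, 1) for b in (0, 1)],
                       dtype=float)
    dithered = error_diffuse(np.random.rand(32, 32, 3), palette)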
Lost in Translation: the Case for Integrated Testing
NASA Technical Reports Server (NTRS)
Young, Aaron
2017-01-01
The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industries to reduce the chances for critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process and in many cases, human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation will outline the unique benefits of integrated testing in catching critical flight errors that can otherwise go undetected, discuss HRA methods used to identify opportunities for human error, and review lessons learned and the challenges surrounding ownership of testing.
Human factors in aircraft incidents - Results of a 7-year study (Andre Allard Memorial Lecture)
NASA Technical Reports Server (NTRS)
Billings, C. E.; Reynard, W. D.
1984-01-01
It is pointed out that nearly all fatal aircraft accidents are preventable, and that most such accidents are due to human error. The present discussion is concerned with the results of a seven-year study of the data collected by the NASA Aviation Safety Reporting System (ASRS). The Aviation Safety Reporting System was designed to stimulate as large a flow as possible of information regarding errors and operational problems in the conduct of air operations. It was implemented in April 1976. In the following 7.5 years, 35,000 reports were received from pilots, controllers, and the armed forces. Human errors were found in more than 80 percent of these reports. Attention is given to the types of events reported, possible causal factors in incidents, the relationship of incidents and accidents, and sources of error in the data. ASRS reports include sufficient detail to permit authorities to institute changes in the national aviation system designed to minimize the likelihood of human error, and to insulate the system against the effects of errors.
A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules.
Ramakrishnan, Sridhar; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques
2016-01-01
Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss-from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges-and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. © 2016 Associated Professional Sleep Societies, LLC.
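The UMP's equations and fitted parameters are not given in the abstract; as a loose illustration of this class of biomathematical model, the toy predictor below combines a homeostatic term that builds with time awake, a sleep-debt term for chronic restriction, and a circadian oscillation. All coefficients and functional forms here are assumptions, not the published UMP.

    import math

    def impairment(hours_awake, sleep_h_per_night, time_of_day_h):
        """Toy two-process performance-impairment score (arbitrary units)."""
        homeostatic = 1.0 - math.exp(-hours_awake / 18.0)     # builds while awake
        debt = max(0.0, 8.0 - sleep_h_per_night) / 8.0        # chronic restriction
        circadian = 0.2 * math.cos(2 * math.pi * (time_of_day_h - 16.0) / 24.0)
        return homeostatic + 0.5 * debt - circadian           # worst near 4 a.m.

    # Example: 20 h awake after a week of 5 h sleep/night, assessed at 4 a.m.
    print(impairment(hours_awake=20, sleep_h_per_night=5, time_of_day_h=4))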
Human factors in surgery: from Three Mile Island to the operating room.
D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco
2009-01-01
Human factors is a discipline that encompasses the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, and the art of ensuring their successful application to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Commonly, surgical error has been thought of as the consequence of lack of skill or ability, and as the result of thoughtless actions. Moreover, the 'operating theatre' has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, and errors arising not from technical incompetence but from poor interpersonal skills. Surgeons will have to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-05
... to reduce human error and organizational failure. On September 14, 2011, the Bureau of Ocean Energy... system based on API RP 75, owners and operators would be required to formulate policy and objectives... safety and environmental records, encourage the use of performance-based operating practices, and...
Traveling Salesman Problem: A Foveating Pyramid Model
ERIC Educational Resources Information Center
Pizlo, Zygmunt; Stefanov, Emil; Saalweachter, John; Li, Zheng; Haxhimusa, Yll; Kropatsch, Walter G.
2006-01-01
We tested human performance on the Euclidean Traveling Salesman Problem using problems with 6-50 cities. Results confirmed our earlier findings that: (a) the time to solve a problem is proportional to the number of cities, and (b) the solution error grows very slowly with the number of cities. We formulated a new version of a pyramid model. The…
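The abstract is truncated before the model details; purely as an illustration of the coarse-to-fine idea that a pyramid model suggests, the sketch below clusters the cities, orders the cluster centroids, and then orders the cities within each cluster. The published model's pyramid construction and foveating refinement differ from this stand-in.

    import math
    import random

    def pyramid_tsp(cities, n_clusters=6, seed=0):
        """Two-level coarse-to-fine tour construction (illustrative only)."""
        random.seed(seed)
        centers = random.sample(cities, n_clusters)
        for _ in range(10):                      # crude k-means clustering
            groups = [[] for _ in centers]
            for c in cities:
                groups[min(range(n_clusters),
                           key=lambda k: math.dist(c, centers[k]))].append(c)
            centers = [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
                       if g else centers[k] for k, g in enumerate(groups)]
        coarse, left = [0], set(range(1, n_clusters))   # order the coarse level
        while left:
            nxt = min(left, key=lambda k: math.dist(centers[coarse[-1]], centers[k]))
            coarse.append(nxt)
            left.remove(nxt)
        tour, pos = [], centers[coarse[0]]              # refine within clusters
        for k in coarse:
            todo = groups[k][:]
            while todo:
                nxt = min(todo, key=lambda c: math.dist(pos, c))
                tour.append(nxt)
                todo.remove(nxt)
                pos = nxt
        return tour

    cities = [(random.random(), random.random()) for _ in range(50)]
    print(len(pyramid_tsp(cities)))   # visits all 50 cities once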
The Human Factors Analysis and Classification System : HFACS : final report.
DOT National Transportation Integrated Search
2000-02-01
Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident reporting systems are not designed around any theoretical framework of human error. As a result, most accident databases are not conducive t...
Subiaul, Francys; Patterson, Eric M; Schilder, Brian; Renner, Elizabeth; Barr, Rachel
2015-11-01
In contrast to other primates, human children's imitation performance goes from low to high fidelity soon after infancy. Are such changes associated with the development of other forms of learning? We addressed this question by testing 215 children (26-59 months) on two social conditions (imitation, emulation) - involving a demonstration - and two asocial conditions (trial-and-error, recall) - involving individual learning - using two touchscreen tasks. The tasks required responding to either three different pictures in a specific picture order (Cognitive: Airplane→Ball→Cow) or three identical pictures in a specific spatial order (Motor-Spatial: Up→Down→Right). There were age-related improvements across all conditions and imitation, emulation and recall performance were significantly better than trial-and-error learning. Generalized linear models demonstrated that motor-spatial imitation fidelity was associated with age and motor-spatial emulation performance, but cognitive imitation fidelity was only associated with age. While this study provides evidence for multiple imitation mechanisms, the development of one of those mechanisms - motor-spatial imitation - may be bootstrapped by the development of another social learning skill - motor-spatial emulation. Together, these findings provide important clues about the development of imitation, which is arguably a distinctive feature of the human species. © 2014 John Wiley & Sons Ltd.
Kaneko, Takaaki; Tomonaga, Masaki
2014-06-01
Humans are often unaware of how they control their limb movements. People pay attention to their own motor movements only when their usual motor routines encounter errors. Yet little is known about the extent to which voluntary actions rely on automatic control, and about when automatic control shifts to deliberate control, in nonhuman primates. In this study, we demonstrate that chimpanzees and humans showed similar limb motor adjustment in response to feedback error during reaching actions, whereas attentional allocation inferred from gaze behavior differed. We found that humans shifted attention to their own motor kinematics as errors were induced in motor trajectory feedback, regardless of whether the errors actually prevented them from reaching their action goals. In contrast, chimpanzees shifted attention to motor execution only when errors actually interfered with achieving a planned action goal. These results indicate that the species differed in their criteria for shifting from automatic to deliberate control of motor actions. It is widely accepted that sophisticated motor repertoires have evolved in humans. Our results suggest that the deliberate monitoring of one's own motor kinematics may have evolved in the human lineage. Copyright © 2014 Elsevier B.V. All rights reserved.
Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.
Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B
2017-01-01
In a medical environment such as an Intensive Care Unit, there are many possible sources of error, and one important source is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether or not the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic intellectual tasks of humans, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole medical CPHSystem design model. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help identify system design flaws and can mitigate the potential medical errors caused by the human intellectual tasks.
Catching errors with patient-specific pretreatment machine log file analysis.
Rangaraj, Dharanipathy; Zhu, Mingyao; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar H; Brame, Scott; Mutic, Sasa
2013-01-01
A robust, efficient, and reliable quality assurance (QA) process is highly desired for modern external beam radiation therapy treatments. Here, we report the results of a semiautomatic, pretreatment, patient-specific QA process based on dynamic machine log file analysis, clinically implemented for intensity modulated radiation therapy (IMRT) treatments delivered by high energy linear accelerators (Varian 2100/2300 EX, Trilogy, iX-D, Varian Medical Systems Inc, Palo Alto, CA). The multileaf collimator (MLC) machine log files are called Dynalog by Varian. Using an in-house developed computer program called "Dynalog QA," we automatically compare the beam delivery parameters in the log files that are generated during pretreatment point dose verification measurements, with the treatment plan to determine any discrepancies in IMRT deliveries. Fluence maps are constructed and compared between the delivered and planned beams. Since clinical introduction in June 2009, 912 machine log file QA analyses were performed by the end of 2010. Among these, 14 errors causing dosimetric deviation were detected and required further investigation and intervention. These errors were the result of human operating mistakes, flawed treatment planning, and data modification during plan file transfer. Minor errors were also reported in 174 other log file analyses, some of which stemmed from false positives and unreliable results; the origins of these are discussed herein. It has been demonstrated that machine log file analysis is a robust, efficient, and reliable QA process capable of detecting errors originating from human mistakes, flawed planning, and data transfer problems. The possibility of detecting these errors is low using point and planar dosimetric measurements.
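Varian's Dynalog format and the in-house "Dynalog QA" program are not reproduced here; the sketch below only shows the general shape of such a check, assuming a simplified CSV layout in which planned and delivered leaf positions are compared sample by sample and deviations beyond an assumed tolerance are flagged for investigation.

    import csv

    TOLERANCE_MM = 1.0   # assumed action threshold for a leaf-position deviation

    def check_log(path):
        """Flag leaf-position deviations in a simplified MLC delivery log.

        Assumes a CSV with columns: time_s, leaf_id, planned_mm, actual_mm.
        Real Dynalog files use a different, vendor-defined layout.
        """
        failures = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                dev = abs(float(row["planned_mm"]) - float(row["actual_mm"]))
                if dev > TOLERANCE_MM:
                    failures.append((row["time_s"], row["leaf_id"], dev))
        return failures   # any flagged sample warrants review before treatment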
The Use Of Computational Human Performance Modeling As Task Analysis Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacques Hugo; David Gertman
2012-07-01
During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload as well as the probability of human error in the fuel inspection and transfer task were influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages as well as physical locations in the fuel handling task where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis that indicated that the postulated fuel exposure accident was less than credible.
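The abstract does not name the simulation package used; a minimal task-network sketch of this kind, written with the SimPy discrete-event library and entirely assumed task durations and error probabilities, might look like:

    import random
    import simpy

    P_DROP_TOOL = 0.02   # assumed probability of a dropped tool per handling step
    events = []

    def fuel_inspection(env, operator_id):
        """One fuel element inspection: walk, handle tool, inspect, transfer."""
        yield env.timeout(random.uniform(30, 60))      # walk to canal position, s
        if random.random() < P_DROP_TOOL:              # strenuous tool handling
            events.append((operator_id, env.now, "dropped tool"))
            yield env.timeout(120)                     # recovery time
        yield env.timeout(random.uniform(90, 180))     # visual inspection demand
        yield env.timeout(random.uniform(45, 90))      # transfer the fuel element

    env = simpy.Environment()
    for op in range(3):
        env.process(fuel_inspection(env, op))
    env.run()
    print(events)

Replicating such runs many times yields distributions of completion time and error opportunities, the kind of output a task network analysis feeds into a human reliability analysis.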
Yang, Shu-Hui; Jerng, Jih-Shuin; Chen, Li-Chin; Li, Yu-Tsu; Huang, Hsiao-Fang; Wu, Chao-Ling; Chan, Jing-Yuan; Huang, Szu-Fen; Liang, Huey-Wen; Sun, Jui-Sheng
2017-11-03
Intra-hospital transportation (IHT) might compromise patient safety because of different care settings and higher demands on human operation. Reports regarding the incidence of IHT-related patient safety events and human failures remain limited. We therefore performed a retrospective analysis of IHT-related events, human failures and unsafe acts, using the hospital-wide IHT process and the incident reporting system database of a medical centre in Taiwan. All eligible IHT-related patient safety events between January 2010 and December 2015 were included. Outcome measures were the incidence rate of IHT-related patient safety events, human failure modes, and types of unsafe acts. There were 206 patient safety events in 2 009 013 IHT sessions (102.5 per 1 000 000 sessions). Most events (n=148, 71.8%) did not involve patient harm, and process events (n=146, 70.9%) were most common. Events at the location of arrival (n=101, 49.0%) were most frequent; this location accounted for 61.0% of events with patient harm and 44.2% of those without harm (p<0.001). Of the events with human failures (n=186), the most common related process step was the preparation of the transportation team (n=91, 48.9%). Contributing unsafe acts included perceptual errors (n=14, 7.5%), decision errors (n=56, 30.1%), skill-based errors (n=48, 25.8%), and non-compliance (n=68, 36.6%). Multivariate analysis showed that human failure found in the arrival and hand-off sub-process (OR 4.84, p<0.001) was associated with increased patient harm, whereas the presence of omission (OR 0.12, p<0.001) was associated with less patient harm. This study shows a need to reduce human failures to prevent patient harm during intra-hospital transportation. We suggest that the transportation team pay specific attention to the sub-process at the location of arrival and prevent errors other than omissions. Long-term monitoring of IHT-related events is also warranted. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Development and Evaluation of Algorithms for Breath Alcohol Screening.
Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael
2016-04-01
Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for the determination of breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation was performed using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone.
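The compensation works because end-expiratory CO2 is roughly constant, so a low measured CO2 level reveals how much a breath sample was diluted; a minimal sketch, assuming a nominal alveolar CO2 fraction of 4.8% and plain signal averaging (the paper's weighting and personalization refinements are omitted):

    ALVEOLAR_CO2 = 4.8   # % by volume; assumed nominal end-expiratory value

    def estimate_brac(alcohol_readings, co2_readings):
        """Estimate undiluted breath alcohol concentration (BrAC).

        Both inputs are lists of simultaneous sensor samples from one exhalation.
        Averaging reduces random sensor error; the CO2 ratio corrects dilution.
        """
        alcohol = sum(alcohol_readings) / len(alcohol_readings)
        co2 = sum(co2_readings) / len(co2_readings)
        return alcohol * (ALVEOLAR_CO2 / co2)   # dilution factor > 1 if diluted

    # Example: a sample diluted about threefold (CO2 reads ~1.6% instead of ~4.8%)
    print(estimate_brac([0.050, 0.055, 0.052], [1.58, 1.61, 1.62]))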
Pavone, Enea Francesco; Tieri, Gaetano; Rizza, Giulia; Tidoni, Emmanuele; Grisoni, Luigi; Aglioti, Salvatore Maria
2016-01-13
Brain monitoring of errors in one's own and other's actions is crucial for a variety of processes, ranging from the fine-tuning of motor skill learning to important social functions, such as reading out and anticipating the intentions of others. Here, we combined immersive virtual reality and EEG recording to explore whether embodying the errors of an avatar by seeing it from a first-person perspective may activate the error monitoring system in the brain of an onlooker. We asked healthy participants to observe, from a first- or third-person perspective, an avatar performing a correct or an incorrect reach-to-grasp movement toward one of two virtual mugs placed on a table. At the end of each trial, participants reported verbally how much they embodied the avatar's arm. Ratings were maximal in the first-person perspective, indicating that immersive virtual reality can be a powerful tool to induce embodiment of an artificial agent, even through mere visual perception and in the absence of any cross-modal boosting. Observation of erroneous grasping from a first-person perspective enhanced error-related negativity and medial-frontal theta power in the trials where human onlookers embodied the virtual character, hinting at the tight link between early, automatic coding of error detection and the sense of embodiment. Error positivity was similar in the first-person (1PP) and third-person (3PP) perspectives, suggesting that conscious coding of errors is similar for self and other. Thus, embodiment plays an important role in activating specific components of the action monitoring system when others' errors are coded as if they are one's own errors. Detecting errors in other's actions is crucial for social functions, such as reading out and anticipating the intentions of others. Using immersive virtual reality and EEG recording, we explored how the brain of an onlooker reacted to the errors of an avatar seen from a first-person perspective. We found that mere observation of erroneous actions enhances electrocortical markers of error detection in the trials where human onlookers embodied the virtual character. Thus, the cerebral system for action monitoring is maximally activated when others' errors are coded as if they are one's own errors. The results have important implications for understanding how the brain can control the external world and for creating new brain-computer interfaces. Copyright © 2016 the authors.
Trial-by-trial adaptation of movements during mental practice under force field.
Anwar, Muhammad Nabeel; Khan, Salman Hameed
2013-01-01
The human nervous system tries to minimize the effect of any external perturbing force by modifying its internal model. These modifications affect the subsequent motor commands generated by the nervous system. Adaptive compensation, along with appropriate modification of the internal model, helps reduce human movement errors. In the current study, we examined how motor imagery influences trial-to-trial learning in a robot-based adaptation task. Two groups of subjects performed reaching movements with or without motor imagery in a velocity-dependent force field. The results show that reaching movements performed with motor imagery have a relatively more focused generalization pattern and a higher learning rate in the training direction.
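Trial-by-trial adaptation of this kind is commonly described by a linear state-space learning rule in which the internal model's compensation is updated by a fraction of each trial's error; the sketch below uses that standard rule with assumed retention and learning-rate values (the study's own analysis may use a different formulation):

    def simulate_adaptation(trials=60, perturbation=1.0, retention=0.98, rate=0.2):
        """Error-based trial-to-trial learning: x[n+1] = A*x[n] + B*e[n].

        x is the internal compensation for the force field and e the movement
        error on each trial; a larger learning rate B makes errors shrink
        faster across trials.
        """
        x, errors = 0.0, []
        for _ in range(trials):
            e = perturbation - x      # error experienced on this trial
            errors.append(e)
            x = retention * x + rate * e
        return errors

    fast = simulate_adaptation(rate=0.30)   # e.g., with motor imagery (assumed)
    slow = simulate_adaptation(rate=0.15)   # e.g., without (assumed)

Fitting such rates to the two groups' error curves is one standard way to quantify the higher learning rate reported above.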
NASA Technical Reports Server (NTRS)
Wiener, Earl L. (Editor); Nagel, David C. (Editor)
1988-01-01
The fundamental principles of human-factors (HF) analysis for aviation applications are examined in a collection of reviews by leading experts, with an emphasis on recent developments. The aim is to provide information and guidance to the aviation community outside the HF field itself. Topics addressed include the systems approach to HF, system safety considerations, the human senses in flight, information processing, aviation workloads, group interaction and crew performance, flight training and simulation, human error in aviation operations, and aircrew fatigue and circadian rhythms. Also discussed are pilot control; aviation displays; cockpit automation; HF aspects of software interfaces; the design and integration of cockpit-crew systems; and HF issues for airline pilots, general aviation, helicopters, and ATC.
NASA Astrophysics Data System (ADS)
Huber, Matthew S.; Ferrière, Ludovic; Losiak, Anna; Koeberl, Christian
2011-09-01
Planar deformation features (PDFs) in quartz, one of the most commonly used diagnostic indicators of shock metamorphism, are planes of amorphous material that follow crystallographic orientations and can thus be distinguished from non-shock-induced fractures in quartz. The process of indexing data for PDFs from universal-stage measurements has traditionally been performed using a manual graphical method, a time-consuming process in which errors can easily be introduced. A mathematical method and computer algorithm, which we call the Automated Numerical Index Executor (ANIE) program for indexing PDFs, was developed and is presented here. The ANIE program is more accurate and faster than the manual graphical determination of Miller-Bravais indices, as it allows control of the exact error used in the calculation and removal of human error from the process.
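Indexing rests on comparing the measured angle between each PDF pole and the quartz c-axis against the small set of crystallographically allowed angles; a minimal sketch with an illustrative, deliberately partial angle table and an assumed angular tolerance (the ANIE program's actual tables and error handling are more complete):

    # Approximate angles (degrees) between the quartz c-axis and the poles of
    # some common PDF orientations; illustrative values, not a complete table.
    PDF_ANGLES = {
        "(0001) c":       0.0,
        "{10-13} omega": 23.0,
        "{10-12} pi":    32.0,
        "{11-22} xi":    48.0,
        "{10-11} r,z":   52.0,
    }

    def index_pdf(measured_angle, tolerance=5.0):
        """Return all Miller-Bravais assignments consistent with a measured
        c-axis-to-pole angle, within the stated angular tolerance."""
        return [name for name, ref in PDF_ANGLES.items()
                if abs(measured_angle - ref) <= tolerance]

    print(index_pdf(24.1))   # -> ['{10-13} omega']

Because the tolerance is an explicit parameter, the exact error used in the calculation is controlled rather than left to graphical judgment, which is the advantage claimed above.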
NASA Technical Reports Server (NTRS)
Morey, Susan; Prevot, Thomas; Mercer, Joey; Martin, Lynne; Bienert, Nancy; Cabrall, Christopher; Hunt, Sarah; Homola, Jeffrey; Kraut, Joshua
2013-01-01
A human-in-the-loop simulation was conducted to examine the effects of varying levels of trajectory prediction uncertainty on air traffic controller workload and performance, as well as how strategies and the use of decision support tools change in response. This paper focuses on the strategies employed by two controllers from separate teams who worked in parallel but independently under identical conditions (airspace, arrival traffic, tools) with the goal of ensuring schedule conformance and safe separation for a dense arrival flow in en route airspace. Despite differences in strategy and methods, both controllers achieved high levels of schedule conformance and safe separation. Overall, results show that trajectory uncertainties introduced by wind and aircraft performance prediction errors do not affect the controllers' ability to manage traffic. Controller strategies were fairly robust to changes in error, though strategies were affected by the amount of delay to absorb (scheduled time of arrival minus estimated time of arrival). Using the results and observations, this paper proposes an ability to dynamically customize the display of information including delay time based on observed error to better accommodate different strategies and objectives.
Bentley, Johanne; Diggle, Christine P.; Harnden, Patricia; Knowles, Margaret A.; Kiltie, Anne E.
2004-01-01
In human cells DNA double strand breaks (DSBs) can be repaired by the non-homologous end-joining (NHEJ) pathway. In a background of NHEJ deficiency, DSBs with mismatched ends can be joined by an error-prone mechanism involving joining between regions of nucleotide microhomology. The majority of joins formed from a DSB with partially incompatible 3′ overhangs by cell-free extracts from human glioblastoma (MO59K) and urothelial (NHU) cell lines were accurate and produced by the overlap/fill-in of mismatched termini by NHEJ. However, repair of DSBs by extracts using tissue from four high-grade bladder carcinomas resulted in no accurate join formation. Junctions were formed by the non-random deletion of terminal nucleotides and showed a preference for annealing at a microhomology of 8 nt buried within the DNA substrate; this process was not dependent on functional Ku70, DNA-PK or XRCC4. Junctions were repaired in the same manner in MO59K extracts in which accurate NHEJ was inactivated by inhibition of Ku70 or DNA-PKcs. These data indicate that bladder tumour extracts are unable to perform accurate NHEJ such that error-prone joining predominates. Therefore, in high-grade tumours mismatched DSBs are repaired by a highly mutagenic, microhomology-mediated, alternative end-joining pathway, a process that may contribute to genomic instability observed in bladder cancer. PMID:15466592
Error management for musicians: an interdisciplinary conceptual framework
Kruse-Weber, Silke; Parncutt, Richard
2014-01-01
Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly – or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey-relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and musicians at all levels. PMID:25120501
A simple but powerful test of perseverative search in dogs and toddlers.
Péter, András; Gergely, Anna; Topál, József; Miklósi, Ádám; Pongrácz, Péter
2015-01-01
Perseverative (A-not-B) errors during the search of a hidden object were recently described in both dogs and 10-month-old infants. It was found that ostensive cues indicating a communicative intent of the person who hides the object played a major role in eliciting perseverative errors in both species. However, the employed experimental set-up gave rise to several alternative explanations regarding the source of these errors. Here we present a simplified protocol that eliminates the ambiguities present in the original design. Using five consecutive object hiding events to one of two locations in a fixed order ("AABBA"), we tested adult companion dogs and human children (24 months old). The experimenter performed the hiding actions while giving ostensive cues in each trial and moved the target object to the given location in a straight line. Our results show that in the B trials, both 24-month-old children and dogs could not reliably find the hidden object, and their performance in the first B trials was significantly below that of any of the A trials. These results are the first to show that the tendency for perseverative errors in an ostensive-communicative context is a robust phenomenon among 2-year-old children and dogs, and not the by-product of a topographically elaborate hiding event.
Illusory conjunctions reflect the time course of the attentional blink.
Botella, Juan; Privado, Jesús; de Liaño, Beatriz Gil-Gómez; Suero, Manuel
2011-07-01
Illusory conjunctions in the time domain are binding errors for features from stimuli presented sequentially but in the same spatial position. A similar experimental paradigm is employed for the attentional blink (AB), an impairment of performance for the second of two targets when it is presented 200-500 msec after the first target. The analysis of errors along the time course of the AB allows the testing of models of illusory conjunctions. In an experiment, observers identified one (control condition) or two (experimental condition) letters in a specified color, so that illusory conjunctions in each response could be linked to specific positions in the series. Two items in the target colors (red and white, embedded in distractors of different colors) were employed in four conditions defined according to whether both targets were in the same or different colors. Besides the U-shaped function for hits, the errors were analyzed by calculating several response parameters reflecting characteristics such as the average position of the responses or the attentional suppression during the blink. The several error parameters cluster in two time courses, as would be expected from prevailing models of the AB. Furthermore, the results match the predictions from Botella, Barriopedro, and Suero's (Journal of Experimental Psychology: Human Perception and Performance, 27, 1452-1467, 2001) model for illusory conjunctions.
Improving Drive Files for Vehicle Road Simulations
NASA Astrophysics Data System (ADS)
Cherng, John G.; Goktan, Ali; French, Mark; Gu, Yi; Jacob, Anil
2001-09-01
Shaker tables are commonly used in laboratories for automotive vehicle component testing to study durability and acoustic performance. An example is development testing of car seats. However, it is difficult to reproduce measured road data perfectly with the response of a shaker table, as there are basic differences in dynamic characteristics between a flexible vehicle and a substantially rigid shaker table. In addition, there are performance limits in the shaker table drive systems that can limit correlation. In practice, an optimal drive signal for the actuators is created iteratively. During each iteration, the error between the road data and the response data is minimised by an optimising algorithm, which is generally part of the feedback loop of the shaker table controller. This study presents a systematic investigation of the errors in the time and frequency domains, as well as the joint time-frequency domain, and an evaluation of different digital signal processing techniques that have been used in previous work. In addition, we present an innovative approach that integrates the dynamic characteristics of car seats and the human body into the error-minimising iteration process. We found that the iteration process can be shortened and the error reduced by using a weighting function created by normalising the frequency response function of the car seat. Two road data test sets were used in the study.
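Iterative drive-file correction of this kind typically updates the drive in the frequency domain by an inverse system model applied to the remaining error; the sketch below adds the weighting idea described above, with W(f) standing in for a normalised seat frequency-response function (the rig model, signals and gain here are synthetic stand-ins):

    import numpy as np

    def iterate_drive(drive, road, system, weight, gain=0.7):
        """One weighted frequency-domain iteration of a shaker drive file.

        drive, road: time histories; system: complex FRF of the rig per FFT bin;
        weight: real weighting per bin (e.g., a normalised seat FRF); gain < 1
        keeps the iteration stable.
        """
        response = np.fft.irfft(np.fft.rfft(drive) * system, n=len(drive))
        error = road - response                     # remaining tracking error
        correction = np.fft.rfft(error) / system    # inverse-model correction
        return drive + gain * np.fft.irfft(weight * correction, n=len(drive))

    # Synthetic demo: the rig is modelled as a gentle low-pass, flat weighting.
    n = 1024
    freqs = np.fft.rfftfreq(n, d=1 / 512.0)
    system = 1.0 / (1.0 + 1j * freqs / 50.0)
    weight = np.ones_like(freqs)
    road = np.random.randn(n)
    drive = np.zeros(n)
    for _ in range(5):
        drive = iterate_drive(drive, road, system, weight)

Emphasising the seat-sensitive bands through W(f) is what lets the iteration converge faster on the error components that matter to the seated human, as reported above.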
Hong, KyungPyo; Jeong, Eun-Kee; Wall, T. Scott; Drakos, Stavros G.; Kim, Daniel
2015-01-01
Purpose: To develop and evaluate a wideband arrhythmia-insensitive-rapid (AIR) pulse sequence for cardiac T1 mapping without image artifacts induced by implantable cardioverter-defibrillators (ICDs). Methods: We developed a wideband AIR pulse sequence by incorporating a saturation pulse with a wide frequency bandwidth (8.9 kHz), in order to achieve uniform T1 weighting in the heart with an ICD. We tested the performance of the original and "wideband" AIR cardiac T1 mapping pulse sequences in phantom and human experiments at 1.5T. Results: In 5 phantoms representing native myocardium and blood and post-contrast blood/tissue T1 values, compared with the control T1 values measured with an inversion-recovery pulse sequence without ICD, T1 values measured with original AIR with ICD were considerably lower (absolute percent error >29%), whereas T1 values measured with wideband AIR with ICD were similar (absolute percent error <5%). Similarly, in 11 human subjects, compared with the control T1 values measured with original AIR without ICD, T1 measured with original AIR with ICD was significantly lower (absolute percent error >10.1%), whereas T1 measured with wideband AIR with ICD was similar (absolute percent error <2.0%). Conclusion: This study demonstrates the feasibility of a wideband pulse sequence for cardiac T1 mapping without significant image artifacts induced by ICDs. PMID:25975192
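Saturation-recovery T1 mapping fits, per pixel, the recovery curve S(TS) = A(1 - exp(-TS/T1)) to images acquired at several saturation-recovery times TS; a minimal sketch using scipy.optimize.curve_fit (the sequence's actual reconstruction and fitting model may include further parameters):

    import numpy as np
    from scipy.optimize import curve_fit

    def sr_model(ts, a, t1):
        """Ideal saturation-recovery signal at saturation-recovery time ts (ms)."""
        return a * (1.0 - np.exp(-ts / t1))

    def fit_t1(ts_ms, signal):
        """Fit T1 (ms) from saturation-recovery samples of one pixel."""
        p0 = (signal.max(), 1000.0)   # crude initial guess
        (a, t1), _ = curve_fit(sr_model, ts_ms, signal, p0=p0, maxfev=5000)
        return t1

    # Synthetic example: a T1 of 1200 ms sampled at several recovery times
    ts = np.array([100.0, 300.0, 600.0, 1200.0, 2400.0, 4800.0])
    sig = sr_model(ts, 1.0, 1200.0) + 0.01 * np.random.randn(ts.size)
    print(fit_t1(ts, sig))

The wideband saturation pulse matters upstream of this fit: if saturation is incomplete near the ICD, the measured samples no longer follow the ideal recovery curve and the fitted T1 is biased, which is the artifact the sequence above removes.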
The effect of saccade metrics on the corollary discharge contribution to perceived eye location
Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.
2015-01-01
Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955
Prevention of medication errors: detection and audit.
Montesi, Germana; Lechi, Alessandro
2009-06-01
1. Medication errors have important implications for patient safety, and their identification is a main target in improving clinical practice, in order to prevent adverse events. 2. Error detection is the first crucial step. Approaches to this are likely to be different in research and routine care, and the most suitable must be chosen according to the setting. 3. The major methods for detecting medication errors and associated adverse drug-related events are chart review, computerized monitoring, administrative databases, and claims data, using direct observation, incident reporting, and patient monitoring. All of these methods have both advantages and limitations. 4. Reporting discloses medication errors, can trigger warnings, and encourages the diffusion of a culture of safe practice. Combining and comparing data from various sources increases the reliability of the system. 5. Error prevention can be planned by means of retroactive and proactive tools, such as audit and Failure Mode, Effect, and Criticality Analysis (FMECA). Audit is also an educational activity, which promotes high-quality care; it should be carried out regularly. In an audit cycle we can compare what is actually done against reference standards and put in place corrective actions to improve the performance of individuals and systems. 6. Patient safety must be the first aim in every setting, in order to build safer systems, learning from errors and reducing the human and fiscal costs.
Modeling human tracking error in several different anti-tank systems
NASA Technical Reports Server (NTRS)
Kleinman, D. L.
1981-01-01
An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.
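The optimal control model proper includes a Kalman estimator and predictor; as a reduced illustration of how observation noise, motor noise, and time delay jointly shape tracking-error time histories, consider this Monte Carlo sketch with assumed parameters:

    import math
    import random

    def track(steps=500, dt=0.02, delay_steps=10,
              obs_noise=0.05, motor_noise=0.05, gain=3.0):
        """Simulate a human-like tracker following a 0.5 Hz sinusoidal target.

        The operator sees the tracking error delay_steps samples late, corrupted
        by observation noise, and issues a proportional correction corrupted by
        motor noise -- a crude stand-in for the optimal control model.
        """
        u, errors = 0.0, []
        buffer = [0.0] * delay_steps          # perceptual/motor time delay
        for k in range(steps):
            target = math.sin(2 * math.pi * 0.5 * k * dt)
            e = target - u
            errors.append(e)
            perceived = buffer.pop(0) + random.gauss(0.0, obs_noise)
            buffer.append(e)
            u += dt * (gain * perceived + random.gauss(0.0, motor_noise))
        return errors

    rms = (sum(e * e for e in track()) / 500.0) ** 0.5
    print(f"RMS tracking error: {rms:.3f}")

Repeating such runs with system-specific gains, delays and noise levels is, in spirit, how Monte Carlo time histories for the three antitank systems can be compared.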
Vavilov, A Iu; Viter, V I
2007-01-01
Mathematical aspects of the measurement errors of modern thermometric models of postmortem cooling of the human body are considered. The main diagnostic body sites used for thermometry are analyzed with the aim of minimizing these errors. The authors offer practical recommendations for reducing errors in estimating the time since death.
Smiley, A M
1990-10-01
In February of 1986 a head-on collision occurred between a freight train and a passenger train in western Canada killing 23 people and causing over $30 million of damage. A Commission of Inquiry appointed by the Canadian government concluded that human error was the major reason for the collision. This report discusses the factors contributing to the human error: mainly poor work-rest schedules, the monotonous nature of the train driving task, insufficient information about train movements, and the inadequate backup systems in case of human error.
A Conceptual Framework for Predicting Error in Complex Human-Machine Environments
NASA Technical Reports Server (NTRS)
Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)
1998-01-01
We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.
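In this style of model, an error is predicted whenever retrieval of the task-relevant intention loses the competition for limited cognitive resources and a default (habitual) response fills the gap; a toy sketch under assumed capacities and probabilities:

    import random

    CAPACITY = 1.0   # total cognitive resources, arbitrary units (assumed)

    def run_trial(retrieval_demand, competing_load):
        """Predict whether a habit-capture error occurs on one simulated trial.

        retrieval_demand: resources needed to recall the task-relevant intention;
        competing_load: resources consumed by concurrent activity. If retrieval
        fails, the decision mechanism falls back on the default (habitual)
        action -- an error whenever the situation is atypical.
        """
        retrieved = (CAPACITY - competing_load) >= retrieval_demand \
            and random.random() < 0.95          # residual retrieval failures
        return "correct" if retrieved else "habit capture"

    trials = [run_trial(0.4, random.uniform(0.2, 0.9)) for _ in range(1000)]
    print(trials.count("habit capture") / 1000.0)

A designer can read the model's prediction directly off such runs: situations that routinely load concurrent demand above the retrieval margin are the ones where habit captures are especially likely.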
Human systems integration in remotely piloted aircraft operations.
Tvaryanas, Anthony P
2006-12-01
The role of humans in remotely piloted aircraft (RPAs) is qualitatively different from manned aviation, lessening the applicability of aerospace medicine human factors knowledge derived from traditional cockpits. Aerospace medicine practitioners should expect to be challenged in addressing RPA crewmember performance. Human systems integration (HSI) provides a model for explaining human performance as a function of the domains of: human factors engineering; personnel; training; manpower; environment, safety, and occupational health (ESOH); habitability; and survivability. RPA crewmember performance is being particularly impacted by issues involving the domains of human factors engineering, personnel, training, manpower, ESOH, and habitability. Specific HSI challenges include: 1) changes in large RPA operator selection and training; 2) human factors engineering deficiencies in current RPA ground control station design and their impact on human error including considerations pertaining to multi-aircraft control; and 3) the combined impact of manpower shortfalls, shiftwork-related fatigue, and degraded crewmember effectiveness. Limited experience and available research makes it difficult to qualitatively or quantitatively predict the collective impact of these issues on RPA crewmember performance. Attending to HSI will be critical for the success of current and future RPA crewmembers. Aerospace medicine practitioners working with RPA crewmembers should gain first-hand knowledge of their task environment while the larger aerospace medicine community needs to address the limited information available on RPA-related aerospace medicine human factors. In the meantime, aeromedical decisions will need to be made based on what is known about other aerospace occupations, realizing this knowledge may have only partial applicability.
Performance of an online translation tool when applied to patient educational material.
Khanna, Raman R; Karliner, Leah S; Eck, Matthias; Vittinghoff, Eric; Koenig, Christopher J; Fang, Margaret C
2011-11-01
Language barriers may prevent clinicians from tailoring patient educational material to the needs of individuals with limited English proficiency. Online translation tools could fill this gap, but their accuracy is unknown. We evaluated the accuracy of an online translation tool for patient educational material. We selected 45 sentences from a pamphlet available in both English and Spanish, and translated them into Spanish using GoogleTranslate™ (GT). Three bilingual Spanish speakers then performed a blinded evaluation of these 45 sentences, comparing GT-translated sentences to those translated professionally, along four domains: fluency (grammatical correctness), adequacy (information preservation), meaning (connotation maintenance), and severity (perceived dangerousness of an error if present). In addition, evaluators indicated whether they had a preference for either the GT-translated or professionally translated sentences. The GT-translated sentences had significantly lower fluency scores compared to the professional translation (3.4 vs. 4.7, P < 0.001), but similar adequacy (4.2 vs. 4.5, P = 0.19) and meaning (4.5 vs. 4.8, P = 0.29) scores. The GT-translated sentences were more likely to have any error (39% vs. 22%, P = 0.05), but not statistically more likely to have a severe error (4% vs. 2%, P = 0.61). Evaluators preferred the professional translation for complex sentences, but not for simple ones. When applied to patient educational material, GT performed comparably to professional human translation in terms of preserving information and meaning, though it was slightly worse in preserving grammar. In situations where professional human translations are unavailable or impractical, online translation may someday fill an important niche. Copyright © 2011 Society of Hospital Medicine.
Rong, Hao; Tian, Jin
2015-05-01
The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility with a case study of a Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that methods using a sequential accident model are inadequate for analyzing human error within a sociotechnical system. System-theoretic accident model and processes (STAMP) was used to develop a universal framework of human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors are related to a broad view of sociotechnical systems, and are more comprehensive than the causation presented in the officially issued accident investigation report. Recommendations for both technical and managerial improvements to lower the risk of such accidents are proposed. An interdisciplinary approach of this kind provides complementary support between system safety and human factors. The integrated method based on STAMP and SD modeling contributes effectively to HRA. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.
2014 Space Human Factors Engineering Standing Review Panel
NASA Technical Reports Server (NTRS)
Steinberg, Susan
2014-01-01
The 2014 Space Human Factors Engineering (SHFE) Standing Review Panel (from here on referred to as the SRP) participated in a WebEx/teleconference with members of the Space Human Factors and Habitability (SHFH) Element, representatives from the Human Research Program (HRP), the National Space Biomedical Research Institute (NSBRI), and NASA Headquarters on November 17, 2014 (list of participants is in Section XI of this report). The SRP reviewed the updated research plans for the Risk of Incompatible Vehicle/Habitat Design (HAB Risk) and the Risk of Performance Errors Due to Training Deficiencies (Train Risk). The SRP also received a status update on the Risk of Inadequate Critical Task Design (Task Risk), the Risk of Inadequate Design of Human and Automation/Robotic Integration (HARI Risk), and the Risk of Inadequate Human-Computer Interaction (HCI Risk).
Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn
2017-02-01
This study aimed to examine the effect of the impact point on the golf ball on the horizontal launch angle and side spin during putting with a mechanical putting arm and human participants. Putts of 3.2 m were completed with a mechanical putting arm (four putter-ball combinations, 160 trials in total) and human participants (two putter-ball combinations, 337 trials in total). The centre of the dimple pattern (centroid) was located and the following variables were measured: distance and angle of the impact point from the centroid and surface area of the impact zone. Multiple regression analysis was conducted to identify whether the impact variables had significant associations with the ball roll variables, horizontal launch angle and side spin. Significant associations were identified between impact variables and horizontal launch angle with the mechanical putting arm, but this was not replicated with human participants. The variability caused by "dimple error" was minimal with the mechanical putting arm and not evident with human participants. Differences between the mechanical putting arm and human participants may be due to the way impulse is imparted on the ball. It is therefore concluded that variability of the impact point on the golf ball has a minimal effect on putting performance.
Zhang, Juanjuan; Collins, Steven H.
2017-01-01
This study uses theory and experiments to investigate the relationship between the passive stiffness of series elastic actuators and torque tracking performance in lower-limb exoskeletons during human walking. Through theoretical analysis with our simplified system model, we found that the optimal passive stiffness matches the slope of the desired torque-angle relationship. We also conjectured that a bandwidth limit resulted in a maximum rate of change in torque error that can be commanded through control input, which is fixed across desired and passive stiffness conditions. This led to hypotheses about the interactions among optimal control gains, passive stiffness and desired quasi-stiffness. Walking experiments were conducted with multiple angle-based desired torque curves. The lowest torque tracking errors observed for each combination of desired and passive stiffness were linearly proportional to the magnitude of the difference between the two stiffnesses. The proportional gains corresponding to the lowest observed errors were found to be inversely proportional to both the passive stiffness and the desired stiffness. These findings supported our hypotheses and provide guidance for application-specific hardware customization as well as controller design for torque-controlled robotic legged locomotion. PMID:29326580
Correction techniques for depth errors with stereo three-dimensional graphic displays
NASA Technical Reports Server (NTRS)
Parrish, Russell V.; Holden, Anthony; Williams, Steven P.
1992-01-01
Three-dimensional (3-D), 'real-world' pictorial displays that incorporate 'true' depth cues via stereopsis techniques have proved effective for displaying complex information in a natural way to enhance situational awareness and to improve pilot/vehicle performance. In such displays, the display designer must map the depths in the real world to the depths available with the stereo display system. However, empirical data have shown that the human subject does not perceive the information at exactly the depth at which it is mathematically placed. Head movements can also seriously distort the depth information that is embedded in stereo 3-D displays, because the transformations used in mapping the visual scene to the depth-viewing volume (DVV) depend intrinsically on the viewer location. The goal of this research was to provide two correction techniques: the first corrects the original visual-scene-to-DVV mapping based on human perception errors, and the second (based on head-positioning sensor input data) corrects for errors induced by head movements. Empirical data are presented to validate both correction techniques. A combination of the two correction techniques effectively eliminates the distortions of depth information embedded in stereo 3-D displays.
Stanton, Neville A; Harvey, Catherine
2017-02-01
Risk assessments in Sociotechnical Systems (STS) tend to be based on error taxonomies, yet the term 'human error' does not sit easily with STS theories and concepts. A new break-link approach was proposed as an alternative risk assessment paradigm to reveal the effect of information communication failures between agents and tasks on the entire STS. A case study of the training of a Royal Navy crew detecting a low-flying Hawk (simulating a sea-skimming missile) is presented, using EAST to model the Hawk-Frigate STS in terms of social, information and task networks. By breaking 19 social links and 12 task links, 137 potential risks were identified. The analysis also revealed the effect of risk moving around the system: reducing the risks to the Hawk increased the risks to the Frigate. Future research should examine the effects of compounded information communication failures on STS performance. Practitioner Summary: The paper presents a step-by-step walk-through of EAST to show how it can be used for risk assessment in sociotechnical systems. The 'broken-links' method takes a systemic, rather than taxonomic, approach to identifying information communication failures in social and task networks.
Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters
Park, Chan Gook
2018-01-01
An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first-stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses the magnetic sensors and a position-trace based course angle. To prevent magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second-stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced compared with the ZUPT-only case, which in turn improves position accuracy. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT-based PDR algorithms. PMID:29690539
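The cascaded design above builds on the standard Kalman predict/update cycle. The following sketch shows that cycle for a single scalar state, a slowly drifting heading error observed through noisy course measurements; it is an illustrative reduction under assumed noise values, not the paper's TCKF implementation:

```python
import numpy as np

def kalman_step(x, P, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter.
    x, P: state estimate and its variance (heading error, rad)
    z:    measurement (course-angle error from the magnetic sensors)
    Q, R: process and measurement noise variances (assumed values)."""
    P = P + Q                 # predict: random-walk drift model
    K = P / (P + R)           # Kalman gain
    x = x + K * (z - x)       # update: blend prediction and measurement
    P = (1.0 - K) * P         # reduced uncertainty after the update
    return x, P

# Hypothetical run: a drifting heading error observed through noise.
rng = np.random.default_rng(1)
x, P, true_err = 0.0, 1.0, 0.0
for _ in range(200):
    true_err += rng.normal(0.0, 0.01)     # gyro-bias-driven drift
    z = true_err + rng.normal(0.0, 0.1)   # noisy course-error measurement
    x, P = kalman_step(x, P, z, Q=1e-4, R=1e-2)
print(f"estimated {x:.3f} rad vs. true {true_err:.3f} rad")
```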
Dilution space ratio of 2H and 18O of doubly labeled water method in humans.
Sagayama, Hiroyuki; Yamada, Yosuke; Racine, Natalie M; Shriver, Timothy C; Schoeller, Dale A
2016-06-01
Variation of the dilution space ratio (Nd/No) between deuterium ((2)H) and oxygen-18 ((18)O) impacts the calculation of total energy expenditure (TEE) by doubly labeled water (DLW). Our aim was to examine the physiological and methodological sources of variation of Nd/No in humans. We analyzed data from 2,297 humans (0.25-89 yr old). This included the variables Nd/No, total body water, TEE, body mass index (BMI), and percent body fat (%fat). To differentiate between physiologic and methodologic sources of variation, the urine samples from 54 subjects were divided, blinded, and analyzed separately, and repeated DLW dosing was performed in an additional 55 participants after 6 mo. Sex, BMI, and %fat did not significantly affect Nd/No, for which the interindividual SD was 0.017. The measurement error from the duplicate urine sample sets was 0.010, and the intraindividual SD of Nd/No in repeat experiments was 0.013. An additional SD of 0.008 was contributed by calibration of the DLW dose water. The variation of measured Nd/No in humans was distributed within a small range, and measurement error accounted for 68% of this variation. There was no evidence that Nd/No differed with respect to sex, BMI, or age between 1 and 80 yr, and thus use of a constant value is suggested to minimize the effect of stable isotope analysis error on the calculation of TEE in DLW studies in humans. Based on a review of 103 publications, the average dilution space ratio is 1.036 for individuals between 1 and 80 yr of age. Copyright © 2016 the American Physiological Society.
Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L
2010-08-23
We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.
Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.
2010-01-01
We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237
Akce, Abdullah; Johnson, Miles; Dantsker, Or; Bretl, Timothy
2013-03-01
This paper presents an interface for navigating a mobile robot that moves at a fixed speed in a planar workspace, with noisy binary inputs that are obtained asynchronously at low bit-rates from a human user through an electroencephalograph (EEG). The approach is to construct an ordered symbolic language for smooth planar curves and to use these curves as desired paths for a mobile robot. The underlying problem is then to design a communication protocol by which the user can, with vanishing error probability, specify a string in this language using a sequence of inputs. Such a protocol, provided by tools from information theory, relies on a human user's ability to compare smooth curves, just like they can compare strings of text. We demonstrate our interface by performing experiments in which twenty subjects fly a simulated aircraft at a fixed speed and altitude with input only from EEG. Experimental results show that the majority of subjects are able to specify desired paths despite a wide range of errors made in decoding EEG signals.
Evaluation of B1 inhomogeneity effect on DCE-MRI data analysis of brain tumor patients at 3T.
Sengupta, Anirban; Gupta, Rakesh Kumar; Singh, Anup
2017-12-02
Dynamic-contrast-enhanced (DCE) MRI data acquired using gradient echo based sequences is affected by errors in flip angle (FA) due to transmit B1 inhomogeneity (B1inh). The purpose of the study was to evaluate the effect of B1inh on quantitative analysis of DCE-MRI data of human brain tumor patients and to evaluate the clinical significance of B1inh correction of perfusion parameters (PPs) on tumor grading. An MRI study was conducted on 35 glioma patients at 3T. The patients had histologically confirmed glioma, 23 high-grade (HG) and 12 low-grade (LG). Data for B1-mapping, T1-mapping and DCE-MRI were acquired. Relative B1 maps (B1rel) were generated using the saturated-double-angle method. T1-maps were computed using the variable flip-angle method. Post-processing was performed for conversion of the signal-intensity-time (S(t)) curve to the concentration-time (C(t)) curve, followed by tracer kinetic analysis (Ktrans, Ve, Vp, Kep) and first-pass analysis (CBV, CBF) using the general tracer-kinetic model. DCE-MRI data were analyzed without and with B1inh correction, and errors in PPs were computed. Receiver-operating-characteristic (ROC) analysis was performed on HG and LG patients. Simulations were carried out to understand the effect of B1 inhomogeneity on DCE-MRI data analysis in a systematic way. S(t) curves mimicking those in tumor tissue were generated and FA errors were introduced, followed by error analysis of PPs. Dependence of FA-based errors on the concentration of contrast agent and on the duration of DCE-MRI data was also studied. Simulations were also done to obtain Ktrans of glioma patients at different B1rel values and to see whether grading is affected. The current study shows that a B1rel value higher than nominal results in an overestimation of C(t) curves as well as derived PPs, and vice versa. Moreover, at the same B1rel values, errors were larger for larger values of C(t). Simulation results showed that the grade of patients can change because of B1inh. B1inh in the human brain at 3T MRI can introduce substantial errors in PPs derived from DCE-MRI data that might affect the accuracy of tumor grading, particularly for border-zone cases. These errors can be mitigated using B1inh correction during DCE-MRI data analysis.
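The FA-error mechanism can be illustrated with the standard spoiled gradient-echo signal equation: if the transmit field scales the nominal flip angle by B1rel, the measured signal, and hence any quantity derived under the nominal-angle assumption, is biased. This is a schematic sketch with hypothetical tissue and sequence parameters, not the paper's analysis pipeline:

```python
import numpy as np

def spgr_signal(M0, T1, TR, alpha_rad):
    """Standard spoiled gradient-echo steady-state signal equation."""
    E1 = np.exp(-TR / T1)
    return M0 * np.sin(alpha_rad) * (1 - E1) / (1 - np.cos(alpha_rad) * E1)

# Hypothetical tissue and sequence parameters (a.u., seconds).
M0, T1, TR = 1.0, 1.5, 0.005
nominal_fa = np.deg2rad(15.0)

# The tissue sees B1rel * nominal flip angle; analysis assuming the
# nominal angle therefore misestimates signal-derived quantities.
for b1rel in (0.8, 1.0, 1.2):
    s = spgr_signal(M0, T1, TR, b1rel * nominal_fa)
    print(f"B1rel = {b1rel:.1f}: signal = {s:.4f}")
```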
Saidi, Maryam; Towhidkhah, Farzad; Gharibzadeh, Shahriar; Lari, Abdolaziz Azizi
2013-12-01
Humans perceive the surrounding world by integrating information from different sensory modalities. Earlier models of multisensory integration rely mainly on traditional Bayesian inference for a single cause (source) and causal Bayesian inference for two causes (for two senses, such as the visual and auditory systems). In this paper a new recurrent neural model is presented for the integration of visual and proprioceptive information. This model is based on population coding and is able to mimic multisensory integration in neural centers of the human brain. The simulation results agree with those achieved by causal Bayesian inference. The model can also simulate the sensory training process for visual and proprioceptive information in humans. The training process in multisensory integration has received little attention in the literature. The effect of proprioceptive training on multisensory perception was investigated through a set of experiments in our previous study. The current study evaluates the effect of both modalities, i.e., visual and proprioceptive training, and compares them with each other through a set of new experiments. In these experiments, the subject was asked to move his/her hand in a circle and estimate its position. The experiments were performed on eight subjects with proprioception training and eight subjects with visual training. The results show three important points: (1) the visual learning rate is significantly higher than that of proprioception; (2) mean visual and proprioceptive errors are decreased by training, but statistical analysis shows that this decrease is significant for proprioceptive error and non-significant for visual error; and (3) visual errors in the training phase, even at its beginning, are much smaller than errors in the main test stage, because in the main test the subject has to attend to two senses. The experimental results in this paper agree with the results of the neural model simulation.
Yoshikawa, Munemitsu; Yamashiro, Kenji; Miyake, Masahiro; Oishi, Maho; Akagi-Kurashige, Yumiko; Kumagai, Kyoko; Nakata, Isao; Nakanishi, Hideo; Oishi, Akio; Gotoh, Norimoto; Yamada, Ryo; Matsuda, Fumihiko; Yoshimura, Nagahisa
2014-10-21
We investigated the association between refractive error in a Japanese population and myopia-related genes identified in two recent large-scale genome-wide association studies. Single-nucleotide polymorphisms (SNPs) in 51 genes that were reported by the Consortium for Refractive Error and Myopia and/or the 23andMe database were genotyped in 3712 healthy Japanese volunteers from the Nagahama Study using HumanHap610K Quad, HumanOmni2.5M, and/or HumanExome Arrays. To evaluate the association between refractive error and recently identified myopia-related genes, we used three approaches to perform quantitative trait locus analyses of mean refractive error in both eyes of the participants: per-SNP, gene-based top-SNP, and gene-based all-SNP analyses. Association plots of successfully replicated genes also were investigated. In our per-SNP analysis, eight myopia gene associations were replicated successfully: GJD2, RASGRF1, BICC1, KCNQ5, CD55, CYP26A1, LRRC4C, and B4GALNT2. Seven additional gene associations were replicated in our gene-based analyses: GRIA4, BMP2, QKI, BMP4, SFRP1, SH3GL2, and EHBP1L1. The signal strength of the reported SNPs and their tagging SNPs increased after considering different linkage disequilibrium patterns across ethnicities. Although two previous studies suggested strong associations between PRSS56, LAMA2, TOX, and RDH5 and myopia, we could not replicate these results. Our results confirmed the significance of the previously reported myopia-related genes and suggest that gene-based replication analyses are more effective than per-SNP analyses. Our comparison with two previous studies suggested that BMP3 SNPs cause myopia primarily in Caucasian populations, while they may exhibit protective effects in Asian populations. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
Kalman filter estimation of human pilot-model parameters
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Roland, V. R.
1975-01-01
The parameters of a human pilot-model transfer function are estimated by applying the extended Kalman filter to the corresponding retarded differential-difference equations in the time domain. Use of computer-generated data indicates that most of the parameters, including the implicit time delay, may be reasonably estimated in this way. When applied to two sets of experimental data obtained from a closed-loop tracking task performed by a human, the Kalman filter generated diverging residuals for one of the measurement types, apparently because of model assumption errors. Application of a modified adaptive technique was found to overcome the divergence and to produce reasonable estimates of most of the parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klinestiver, L.R.
Psychophysiological factors are not uncommon terms in the aviation incident/accident investigation sequence where human error is involved. It is highly suspected that the same psychophysiological factors also exist in the industrial arena where operator personnel function, but there is little evidence in the literature indicating how management and subordinates cope with these factors to prevent or reduce accidents. Human factors psychophysiological training is well established in the aviation industry. However, while the industrial arena appears to analyze psychophysiological factors in accident investigations, there is little evidence that established training programs exist for supervisors and operator personnel.
A Unified Model of Performance: Validation of its Predictions across Different Sleep/Wake Schedules
Ramakrishnan, Sridhar; Wesensten, Nancy J.; Balkin, Thomas J.; Reifman, Jaques
2016-01-01
Study Objectives: Historically, mathematical models of human neurobehavioral performance developed on data from one sleep study were limited to predicting performance in similar studies, restricting their practical utility. We recently developed a unified model of performance (UMP) to predict the effects of the continuum of sleep loss—from chronic sleep restriction (CSR) to total sleep deprivation (TSD) challenges—and validated it using data from two studies of one laboratory. Here, we significantly extended this effort by validating the UMP predictions across a wide range of sleep/wake schedules from different studies and laboratories. Methods: We developed the UMP on psychomotor vigilance task (PVT) lapse data from one study encompassing four different CSR conditions (7 d of 3, 5, 7, and 9 h of sleep/night), and predicted performance in five other studies (from four laboratories), including different combinations of TSD (40 to 88 h), CSR (2 to 6 h of sleep/night), control (8 to 10 h of sleep/night), and nap (nocturnal and diurnal) schedules. Results: The UMP accurately predicted PVT performance trends across 14 different sleep/wake conditions, yielding average prediction errors between 7% and 36%, with the predictions lying within 2 standard errors of the measured data 87% of the time. In addition, the UMP accurately predicted performance impairment (average error of 15%) for schedules (TSD and naps) not used in model development. Conclusions: The unified model of performance can be used as a tool to help design sleep/wake schedules to optimize the extent and duration of neurobehavioral performance and to accelerate recovery after sleep loss. Citation: Ramakrishnan S, Wesensten NJ, Balkin TJ, Reifman J. A unified model of performance: validation of its predictions across different sleep/wake schedules. SLEEP 2016;39(1):249–262. PMID:26518594
Basics for sensorimotor information processing: some implications for learning
Vidal, Franck; Meckler, Cédric; Hasbroucq, Thierry
2015-01-01
In sensorimotor activities, learning requires efficient information processing, whether in car driving, sport activities or human–machine interactions. Several factors may affect the efficiency of such processing: they may be extrinsic (i.e., task-related) or intrinsic (i.e., subject-related). The effects of these factors are intimately related to the structure of human information processing. In the present article we will focus on some of them, which are poorly taken into account, even when minimizing errors or their consequences is an essential issue at stake. Among the extrinsic factors, we will discuss, first, the effects of the quantity and quality of information, secondly, the effects of instruction and, thirdly, motor program learning. Among the intrinsic factors, we will discuss first the influence of prior information, secondly how individual strategies affect performance and, thirdly, we will stress the fact that although the human brain is not structured to function without error (which is not new), humans are able to detect their errors very quickly and, in most cases, fast enough to correct them before they result in an overt failure. Extrinsic and intrinsic factors are important to take into account for learning because (1) they strongly affect performance, either in terms of speed or accuracy, which facilitates or impairs learning, (2) the effect of certain extrinsic factors may be strongly modified by learning and (3) certain intrinsic factors might be exploited for learning strategies. PMID:25762944
Optimized Assistive Human-Robot Interaction Using Reinforcement Learning.
Modares, Hamidreza; Ranatunga, Isura; Lewis, Frank L; Popa, Dan O
2016-03-01
An intelligent human-robot interaction (HRI) system with adjustable robot behavior is presented. The proposed HRI system assists the human operator to perform a given task with minimum workload demands and optimizes the overall human-robot system performance. Motivated by human factor studies, the presented control structure consists of two control loops. First, a robot-specific neuro-adaptive controller is designed in the inner loop to make the unknown nonlinear robot behave like a prescribed robot impedance model as perceived by a human operator. In contrast to existing neural network and adaptive impedance-based control methods, no information of the task performance or the prescribed robot impedance model parameters is required in the inner loop. Then, a task-specific outer-loop controller is designed to find the optimal parameters of the prescribed robot impedance model to adjust the robot's dynamics to the operator skills and minimize the tracking error. The outer loop includes the human operator, the robot, and the task performance details. The problem of finding the optimal parameters of the prescribed robot impedance model is transformed into a linear quadratic regulator (LQR) problem which minimizes the human effort and optimizes the closed-loop behavior of the HRI system for a given task. To obviate the requirement of the knowledge of the human model, integral reinforcement learning is used to solve the given LQR problem. Simulation results on an x - y table and a robot arm, and experimental implementation results on a PR2 robot confirm the suitability of the proposed method.
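For context, the outer-loop problem described above reduces to a standard LQR design. The sketch below solves a model-based LQR for a hypothetical double-integrator plant with illustrative weights; note that the paper's contribution is to solve this problem with integral reinforcement learning, without requiring such an explicit model:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant standing in for the prescribed
# impedance model: state = [tracking error, error rate], input = effort.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])  # illustrative penalty on tracking error
R = np.array([[0.1]])     # illustrative penalty on human/robot effort

# Model-based LQR: solve the Riccati equation and form u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
print("LQR gain K =", K)
```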
Primary Motor Cortex Involvement in Initial Learning during Visuomotor Adaptation
ERIC Educational Resources Information Center
Riek, Stephan; Hinder, Mark R.; Carson, Richard G.
2012-01-01
Human motor behaviour is continually modified on the basis of errors between desired and actual movement outcomes. It is emerging that the role played by the primary motor cortex (M1) in this process is contingent upon a variety of factors, including the nature of the task being performed, and the stage of learning. Here we used repetitive TMS to…
Lee, Min Su; Ju, Hojin; Song, Jin Woo; Park, Chan Gook
2015-11-06
In this paper, we present a method for finding the enhanced heading and position of pedestrians by fusing Zero velocity UPdaTe (ZUPT) based pedestrian dead reckoning (PDR) with the kinematic constraints of the lower human body. ZUPT is a well-known algorithm for PDR and provides a sufficiently accurate position solution over short periods, but it cannot guarantee a stable and reliable heading because it suffers from magnetic disturbance in determining heading angles, which degrades the overall position accuracy as time passes. The basic idea of the proposed algorithm is integrating the left and right foot positions obtained by ZUPTs with the heading and position information from an IMU mounted on the waist. To integrate this information, a kinematic model of the lower human body, calculated using orientation sensors mounted on both thighs and calves, is adopted. We note that the positions of the left and right feet cannot be far apart because of the kinematic constraints of the body, so the kinematic model generates new measurements for the waist position. An Extended Kalman Filter (EKF) on the waist data estimates and corrects error states using these measurements together with magnetic heading measurements, which enhances the heading accuracy. The updated position information is fed into the foot-mounted sensors, and reupdate processes are performed to correct the position error of each foot. The proposed update-reupdate technique consequently ensures improved observability of error states and position accuracy. Moreover, the proposed method provides all the information about the lower human body, so that it can be applied more effectively to motion tracking. The effectiveness of the proposed algorithm is verified via experimental results, which show that a 1.25% Return Position Error (RPE) with respect to walking distance is achieved.
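A core ingredient of any ZUPT-based PDR pipeline is detecting the stance phase, during which the foot-mounted IMU is assumed stationary. The following sketch shows a common threshold-based detector; the thresholds and test signals are illustrative assumptions, not values from the paper:

```python
import numpy as np

def detect_stance(accel, gyro, g=9.81, acc_tol=0.3, gyro_tol=0.5):
    """Flag samples where the foot is assumed stationary (stance phase),
    i.e., specific force near gravity and angular rate near zero.
    Thresholds are illustrative assumptions, not values from the paper."""
    acc_mag = np.linalg.norm(accel, axis=1)
    gyro_mag = np.linalg.norm(gyro, axis=1)
    return (np.abs(acc_mag - g) < acc_tol) & (gyro_mag < gyro_tol)

# Hypothetical foot-mounted IMU samples (N x 3 arrays).
rng = np.random.default_rng(2)
accel = rng.normal([0.0, 0.0, 9.81], 0.05, size=(100, 3))
gyro = rng.normal(0.0, 0.05, size=(100, 3))
stance = detect_stance(accel, gyro)
print(f"{stance.mean():.0%} of samples flagged as stance phase")
```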
An optomechanical model eye for ophthalmological refractive studies.
Arianpour, Ashkan; Tremblay, Eric J; Stamenov, Igor; Ford, Joseph E; Schanzlin, David J; Lo, Yuhwa
2013-02-01
To create an accurate, low-cost optomechanical model eye for investigation of refractive errors in clinical and basic research studies. An optomechanical fluid-filled eye model with dimensions consistent with the human eye was designed and fabricated. Optical simulations were performed on the optomechanical eye model, and the quantified resolution and refractive errors were compared with the widely used Navarro eye model using the ray-tracing software ZEMAX (Radiant Zemax, Redmond, WA). The resolution of the physical optomechanical eye model was then quantified with a complementary metal-oxide semiconductor imager using the image resolution software SFR Plus (Imatest, Boulder, CO). Refractive, manufacturing, and assembling errors were also assessed. A refractive intraocular lens (IOL) and a diffractive IOL were added to the optomechanical eye model for tests and analyses of a 1951 U.S. Air Force target chart. Resolution and aberrations of the optomechanical eye model and the Navarro eye model were qualitatively similar in ZEMAX simulations. Experimental testing found that the optomechanical eye model reproduced properties pertinent to human eyes, including resolution better than 20/20 visual acuity and a decrease in resolution as the field of view increased in size. The IOLs were also integrated into the optomechanical eye model to image objects at distances of 15, 10, and 3 feet, and they indicated a resolution of 22.8 cycles per degree at 15 feet. A life-sized optomechanical eye model with the flexibility to be patient-specific was designed and constructed. The model had the resolution of a healthy human eye and recreated normal refractive errors. This model may be useful in the evaluation of IOLs for cataract surgery. Copyright 2013, SLACK Incorporated.
Subiaul, Francys; Romansky, Kathryn; Cantlon, Jessica F; Klein, Tovah; Terrace, Herbert
2007-10-01
Here we compare the performance of 2-year-old human children with that of adult rhesus macaques on a cognitive imitation task. The task was to respond, in a particular order, to arbitrary sets of photographs that were presented simultaneously on a touch sensitive video monitor. Because the spatial position of list items was varied from trial to trial, subjects could not learn this task as a series of specific motor responses. On some lists, subjects with no knowledge of the ordinal position of the items were given the opportunity to learn the order of those items by observing an expert model. Children, like monkeys, learned new lists more rapidly in a social condition where they had the opportunity to observe an experienced model perform the list in question, than under a baseline condition in which they had to learn new lists entirely by trial and error. No differences were observed between the accuracy of each species' responses to individual items or in the frequencies with which they made different types of errors. These results provide clear evidence that monkeys and humans share the ability to imitate novel cognitive rules (cognitive imitation).
Quantum Error Correction: Optimal, Robust, or Adaptive? Or, Where is The Quantum Flyball Governor?
NASA Astrophysics Data System (ADS)
Kosut, Robert; Grace, Matthew
2012-02-01
In The Human Use of Human Beings: Cybernetics and Society (1950), Norbert Wiener introduces feedback control in this way: ``This control of a machine on the basis of its actual performance rather than its expected performance is known as feedback ... It is the function of control ... to produce a temporary and local reversal of the normal direction of entropy.'' The classic classroom example of feedback control is the all-mechanical flyball governor used by James Watt in the 18th century to regulate the speed of rotating steam engines. What is it that is so compelling about this apparatus? First, it is easy to understand how it regulates the speed of a rotating steam engine. Secondly, and perhaps more importantly, it is a part of the device itself. A naive observer would not distinguish this mechanical piece from all the rest. So it is natural to ask, where is the all-quantum device which is self-regulating, i.e., the Quantum Flyball Governor? Is the goal of quantum error correction (QEC) to design such a device? Developing the computational and mathematical tools to design this device is the topic of this talk.
Predictive models of safety based on audit findings: Part 2: Measurement of model validity.
Hsiao, Yu-Lin; Drury, Colin; Wu, Changxu; Paquet, Victor
2013-07-01
Part 1 of this study sequence developed a human factors/ergonomics (HF/E) based classification system (termed HFACS-MA) for safety audit findings and established its measurement reliability. In Part 2, we used the human error categories of HFACS-MA as predictors of future safety performance. Audit records and monthly safety incident reports from two airlines submitted to their regulatory authority were available for analysis, covering over 6.5 years. Two participants derived consensus results of HF/E errors from the audit reports using HFACS-MA. We adopted Neural Network and Poisson regression methods to establish nonlinear and linear prediction models, respectively. These models were tested for the validity of their predictions of the safety data, and only the Neural Network method resulted in substantial, significant predictive ability for each airline. Alternative predictions from counts of audit findings and from the time sequence of safety data produced some significant results, but of much smaller magnitude than HFACS-MA. The use of HF/E analysis of audit findings provided proactive predictors of future safety performance in the aviation maintenance field. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
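As an illustration of the linear branch of the modeling described above, the sketch below fits a Poisson regression of monthly incident counts on counts of audit findings using statsmodels; all data, category structure, and coefficients are synthetic placeholders, not the airlines' records:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_months = 78  # roughly 6.5 years of monthly records (hypothetical)

# Synthetic predictors: monthly counts of audit findings in three
# hypothetical HF/E error categories.
X = rng.poisson(lam=[3.0, 1.5, 0.8], size=(n_months, 3)).astype(float)

# Synthetic response: monthly incident counts with an assumed log-link
# dependence on the first two categories.
y = rng.poisson(np.exp(0.1 + 0.15 * X[:, 0] + 0.05 * X[:, 1]))

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Poisson())
print(model.fit().params)
```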
Understanding adverse events: human factors.
Reason, J
1995-01-01
(1) Human rather than technical failures now represent the greatest threat to complex and potentially hazardous systems. This includes healthcare systems. (2) Managing the human risks will never be 100% effective. Human fallibility can be moderated, but it cannot be eliminated. (3) Different error types have different underlying mechanisms, occur in different parts of the organisation, and require different methods of risk management. The basic distinctions are between: slips, lapses, trips, and fumbles (execution failures) versus mistakes (planning or problem-solving failures), with mistakes divided into rule-based and knowledge-based mistakes; errors (information-handling problems) versus violations (motivational problems); and active versus latent failures. Active failures are committed by those in direct contact with the patient; latent failures arise in organisational and managerial spheres, and their adverse effects may take a long time to become evident. (4) Safety-significant errors occur at all levels of the system, not just at the sharp end. Decisions made in the upper echelons of the organisation create the conditions in the workplace that subsequently promote individual errors and violations. Latent failures are present long before an accident and are hence prime candidates for principled risk management. (5) Measures that involve sanctions and exhortations (that is, moralistic measures directed to those at the sharp end) have only very limited effectiveness, especially so in the case of highly trained professionals. (6) Human factors problems are a product of a chain of causes in which the individual psychological factors (that is, momentary inattention, forgetting, etc.) are the last and least manageable links. Attentional "capture" (preoccupation or distraction) is a necessary condition for the commission of slips and lapses. Yet, its occurrence is almost impossible to predict or control effectively. The same is true of the factors associated with forgetting. States of mind contributing to error are thus extremely difficult to manage; they can happen to the best of people at any time. (7) People do not act in isolation. Their behaviour is shaped by circumstances. The same is true for errors and violations. The likelihood of an unsafe act being committed is heavily influenced by the nature of the task and by the local workplace conditions. These, in turn, are the product of "upstream" organisational factors. Great gains in safety can be achieved through relatively small modifications of equipment and workplaces. (8) Automation and increasingly advanced equipment do not cure human factors problems; they merely relocate them. In contrast, training people to work effectively in teams costs little, but has achieved significant enhancements of human performance in aviation. (9) Effective risk management depends critically on a confidential and preferably anonymous incident monitoring system that records the individual, task, situational, and organisational factors associated with incidents and near misses. (10) Effective risk management means the simultaneous and targeted deployment of limited remedial resources at different levels of the system: the individual or team, the task, the situation, and the organisation as a whole. PMID:10151618
Performance factors of mobile rich media job aids for community health workers
Florez-Arango, Jose F; Dunn, Kim; Zhang, Jiajie
2011-01-01
Objective To study and analyze the possible benefits on performance of community health workers using point-of-care clinical guidelines implemented as interactive rich media job aids on small-format mobile platforms. Design A crossover study with one intervention (rich media job aids) and one control (traditional job aids), two periods, with 50 community health workers, each subject solving a total of 15 standardized cases per period (30 cases in total per subject). Measurements Error rate per case and task, protocol compliance. Results A total of 1394 cases were evaluated. The intervention reduced errors by an average of 33.15% (p=0.001) and increased protocol compliance by 30.18% (p<0.001). Limitations Medical cases were presented on human patient simulators in a laboratory setting, not on real patients. Conclusion These results indicate encouraging prospects for mHealth technologies in general, and the use of rich media clinical guidelines on cell phones in particular, for the improvement of community health worker performance in developing countries. PMID:21292702
Performance factors of mobile rich media job aids for community health workers.
Florez-Arango, Jose F; Iyengar, M Sriram; Dunn, Kim; Zhang, Jiajie
2011-01-01
To study and analyze the possible benefits on performance of community health workers using point-of-care clinical guidelines implemented as interactive rich media job aids on small-format mobile platforms. A crossover study with one intervention (rich media job aids) and one control (traditional job aids), two periods, with 50 community health workers, each subject solving a total of 15 standardized cases per period (30 cases in total per subject). Error rate per case and task, protocol compliance. A total of 1394 cases were evaluated. The intervention reduced errors by an average of 33.15% (p = 0.001) and increased protocol compliance by 30.18% (p < 0.001). A limitation is that medical cases were presented on human patient simulators in a laboratory setting, not on real patients. These results indicate encouraging prospects for mHealth technologies in general, and the use of rich media clinical guidelines on cell phones in particular, for the improvement of community health worker performance in developing countries.
Skaugset, L Melissa; Farrell, Susan; Carney, Michele; Wolff, Margaret; Santen, Sally A; Perry, Marcia; Cico, Stephen John
2016-08-01
Emergency physicians work in a fast-paced environment that is characterized by frequent interruptions and the expectation that they will perform multiple tasks efficiently and without error while maintaining oversight of the entire emergency department. However, there is a lack of definition and understanding of the behaviors that constitute effective task switching and multitasking, as well as how to improve these skills. This article reviews the literature on task switching and multitasking in a variety of disciplines, including cognitive science, human factors engineering, business, and medicine, to define and describe the successful performance of task switching and multitasking in emergency medicine. Multitasking, defined as the performance of two tasks simultaneously, is not possible except when behaviors become completely automatic; instead, physicians rapidly switch between small tasks. This task switching causes disruption in the primary task and may contribute to error. A framework is described to enhance the understanding and practice of these behaviors. Copyright © 2015 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Rennie, Waverly; Phetsouvanh, Rattanaxay; Lupisan, Socorro; Vanisaveth, Viengsay; Hongvanthong, Bouasy; Phompida, Samlane; Alday, Portia; Fulache, Mila; Lumagui, Richard; Jorgensen, Pernille; Bell, David; Harvey, Steven
2007-01-01
The usefulness of rapid diagnostic tests (RDTs) in malaria case management depends on the accuracy of the diagnoses they provide. Despite their apparent simplicity, previous studies indicate that RDT accuracy is highly user-dependent. As malaria RDTs will frequently be used in remote areas with little supervision or support, minimising mistakes is crucial. This paper describes the development of new instructions (job aids) to improve health worker performance, based on observations of common errors made by remote health workers and villagers in preparing and interpreting RDTs in the Philippines and Laos. Initial preparation using the instructions provided by the manufacturer was poor, but improved significantly with the job aids (e.g., correct use of both the dipstick and cassette increased by 17% in the Philippines). However, mistakes in preparation remained commonplace, especially for dipstick RDTs, as did mistakes in the interpretation of results. A short orientation on correct use and interpretation further improved accuracy, from 70% to 80%. The results indicate that apparently simple diagnostic tests can be poorly performed and interpreted, but provision of clear, simple instructions can reduce these errors. Preparation of appropriate instructions and training, as well as monitoring of user behaviour, are an essential part of rapid test implementation.
Human Factors Considerations for Area Navigation Departure and Arrival Procedures
NASA Technical Reports Server (NTRS)
Barhydt, Richard; Adams, Catherine A.
2006-01-01
Area navigation (RNAV) procedures are being implemented in the United States and around the world as part of a transition to a performance-based navigation system. These procedures are providing significant benefits and have also caused some human factors issues to emerge. Under sponsorship from the Federal Aviation Administration (FAA), the National Aeronautics and Space Administration (NASA) has undertaken a project to document RNAV-related human factors issues and propose areas for further consideration. The component focusing on RNAV Departure and Arrival Procedures involved discussions with expert users, a literature review, and a focused review of the NASA Aviation Safety Reporting System (ASRS) database. Issues were found to include aspects of air traffic control and airline procedures, aircraft systems, and procedure design. Major findings suggest the need for specific instrument procedure design guidelines that consider the effects of human performance. Ongoing industry and government activities to address air-ground communication terminology, design improvements, and chart-database commonality are strongly encouraged. A review of factors contributing to RNAV in-service errors would likely lead to improved system design and operational performance.
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to 'cleaner'-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
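A minimal sketch of the voxel-wise likelihood idea, assuming a normal model with unknown variance: the likelihood ratio compares "no activation" (mean zero) against the maximum-likelihood alternative. The benchmark of 32 for strong evidence follows a common convention in the Likelihood paradigm, and the time series are synthetic, not brain data:

```python
import numpy as np

def likelihood_ratio(y):
    """Profile likelihood ratio for 'no activation' (mean 0) vs. the
    maximum-likelihood mean, under a normal model with unknown variance.
    Values much larger than 1 favor activation at the voxel."""
    n = y.size
    s0 = np.mean(y ** 2)               # MLE of variance under mean = 0
    s1 = np.mean((y - y.mean()) ** 2)  # MLE of variance under free mean
    return (s0 / s1) ** (n / 2.0)

rng = np.random.default_rng(4)
series = {"active voxel": rng.normal(0.5, 1.0, 50),  # synthetic signal
          "null voxel": rng.normal(0.0, 1.0, 50)}    # synthetic noise
for name, y in series.items():
    lr = likelihood_ratio(y)
    verdict = "strong" if lr >= 32 else "weak"  # conventional benchmark
    print(f"{name}: LR = {lr:.1f} ({verdict} evidence)")
```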
Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander
2015-04-01
Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.
Shadmehr, Reza; Ohminami, Shinya; Tsutsumi, Ryosuke; Shirota, Yuichiro; Shimizu, Takahiro; Tanaka, Nobuyuki; Terao, Yasuo; Tsuji, Shoji; Ugawa, Yoshikazu; Uchimura, Motoaki; Inoue, Masato; Kitazawa, Shigeru
2015-01-01
Cerebellar damage can profoundly impair human motor adaptation. For example, if reaching movements are perturbed abruptly, cerebellar damage impairs the ability to learn from the perturbation-induced errors. Interestingly, if the perturbation is imposed gradually over many trials, people with cerebellar damage may exhibit improved adaptation. However, this result is controversial, since the differential effects of gradual vs. abrupt protocols have not been observed in all studies. To examine this question, we recruited patients with pure cerebellar ataxia due to cerebellar cortical atrophy (n = 13) and asked them to reach to a target while viewing the scene through wedge prisms. The prisms were computer controlled, making it possible to impose the full perturbation abruptly in one trial, or build up the perturbation gradually over many trials. To control visual feedback, we employed shutter glasses that removed visual feedback during the reach, allowing us to measure trial-by-trial learning from error (termed error-sensitivity), and trial-by-trial decay of motor memory (termed forgetting). We found that the patients benefited significantly from the gradual protocol, improving their performance with respect to the abrupt protocol by exhibiting smaller errors during the exposure block, and producing larger aftereffects during the postexposure block. Trial-by-trial analysis suggested that this improvement was due to increased error-sensitivity in the gradual protocol. Therefore, cerebellar patients exhibited an improved ability to learn from error if they experienced those errors gradually. This improvement coincided with increased error-sensitivity and was present in both groups of subjects, suggesting that control of error-sensitivity may be spared despite cerebellar damage. PMID:26311179
Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher
2015-01-01
Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement (“jump”) consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. PMID:25609106
Local Use-Dependent Sleep in Wakefulness Links Performance Errors to Learning
Quercia, Angelica; Zappasodi, Filippo; Committeri, Giorgia; Ferrara, Michele
2018-01-01
Sleep and wakefulness are no longer to be considered as discrete states. During wakefulness, brain regions can enter a sleep-like state (off-periods) in response to a prolonged period of activity (local use-dependent sleep). Similarly, during nonREM sleep the slow-wave activity, the hallmark of sleep plasticity, increases locally in brain regions previously involved in a learning task. Recent studies have demonstrated that behavioral performance may be impaired by off-periods in wake in task-related regions. However, the relation between off-periods in wake, related performance errors and learning is still untested in humans. Here, by employing high-density electroencephalographic (hd-EEG) recordings, we investigated local use-dependent sleep in wake, asking participants to continuously repeat two intensive spatial navigation tasks. Critically, one task relied on previous map learning (Wayfinding) while the other did not (Control). Behaviorally awake participants, who were not sleep deprived, showed progressive increments of delta activity only during the learning-based spatial navigation task. As shown by source localization, delta activity was mainly localized in the left parietal and bilateral frontal cortices, all regions known to be engaged in spatial navigation tasks. Moreover, during the Wayfinding task, these increments of delta power were specifically associated with errors, whose probability of occurrence was significantly higher compared to the Control task. Unlike the Wayfinding task, during the Control task neither delta activity nor the number of errors increased progressively. Furthermore, during the Wayfinding task, both the number and the amplitude of individual delta waves, as indexes of neuronal silence in wake (off-periods), were significantly higher during errors than hits. Finally, a path analysis linked the learning-related plasticity of the spatial navigation circuits to off-periods in wake. In conclusion, local sleep regulation in wakefulness, associated with performance failures, could be functionally linked to learning-related cortical plasticity. PMID:29666574
Human error and the search for blame
NASA Technical Reports Server (NTRS)
Denning, Peter J.
1989-01-01
Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads, through the consideration of mistakes, to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame, which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.
Individual Differences in Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeffrey C. Joe; Ronald L. Boring
2014-06-01
While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when operators are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.
Fritz, Jan; U-Thainual, Paweena; Ungi, Tamas; Flammang, Aaron J.; Fichtinger, Gabor; Iordachita, Iulian I.
2012-01-01
Purpose: To prospectively assess overlay technology in providing accurate and efficient targeting for magnetic resonance (MR) imaging–guided shoulder and hip joint arthrography. Materials and Methods: A prototype augmented reality image overlay system was used in conjunction with a clinical 1.5-T MR imager. A total of 24 shoulder joint and 24 hip joint injections were planned in 12 human cadavers. Two operators (A and B) participated, each performing procedures on different cadavers using image overlay guidance. MR imaging was used to confirm needle positions, monitor injections, and perform MR arthrography. Accuracy was assessed according to the rate of needle adjustment, target error, and whether the injection was intraarticular. Efficiency was assessed according to arthrography procedural time. Operator differences were assessed with comparison of accuracy and procedure times between the operators. Mann-Whitney U test and Fisher exact test were used to assess group differences. Results: Forty-five arthrography procedures (23 shoulders, 22 hips) were performed. Three joints had prostheses and were excluded. Operator A performed 12 shoulder and 12 hip injections. Operator B performed 11 shoulder and 10 hip injections. Needle adjustment rate was 13% (six of 45; one for operator A and five for operator B). Target error was 3.1 mm ± 1.2 (standard deviation) (operator A, 2.9 mm ± 1.4; operator B, 3.5 mm ± 0.9). Intraarticular injection rate was 100% (45 of 45). The average arthrography time was 14 minutes (range, 6–27 minutes; 12 minutes [range, 6–25 minutes] for operator A and 16 minutes [range, 6–27 min] for operator B). Operator differences were not significant with regard to needle adjustment rate (P = .08), target error (P = .07), intraarticular injection rate (P > .99), and arthrography time (P = .22). Conclusion: Image overlay technology provides accurate and efficient MR guidance for successful shoulder and hip arthrography in human cadavers. © RSNA, 2012 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.12112640/-/DC1 PMID:22843764
de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro
2017-06-01
Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects of movement to aid in neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that executes segmentation and motor performance parameter extraction in time-series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create models for movements using a labeled dataset and later use them for automatic assessment. We validated our framework in preliminary tests involving six healthy adult subjects, who executed common movements in functional tests and rehabilitation exercise sessions such as sit-to-stand and lateral elevation of the arms, and five elderly subjects, two of whom had limited mobility, who executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).
snpAD: An ancient DNA genotype caller.
Prüfer, Kay
2018-06-21
The study of ancient genomes can elucidate the evolutionary past. However, analyses are complicated by base-modifications in ancient DNA molecules that result in errors in DNA sequences. These errors are particularly common near the ends of sequences and pose a challenge for genotype calling. I describe an iterative method that estimates genotype frequencies and errors along sequences to allow for accurate genotype calling from ancient sequences. The implementation of this method, called snpAD, performs well on high-coverage ancient data, as shown by simulations and by subsampling the data of a high-coverage Neandertal genome. Although estimates for low-coverage genomes are less accurate, I am able to derive approximate estimates of heterozygosity from several low-coverage Neandertals. These estimates show that low heterozygosity, compared to modern humans, was common among Neandertals. The C++ code of snpAD is freely available at http://bioinf.eva.mpg.de/snpAD/. Supplementary data are available at Bioinformatics online.
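To make the flavor of such an iterative estimator concrete, the following is a minimal EM-style sketch in Python. It assumes a single global error rate and a three-genotype biallelic model, whereas snpAD itself estimates position-dependent error profiles along the sequence; all names and counts here are illustrative.

    import numpy as np

    def em_genotype_caller(ref_counts, alt_counts, n_iter=50):
        """Toy EM: jointly estimate genotype frequencies and one error rate
        from per-site reference/alternative read counts (RR, RA, AA model)."""
        freqs = np.array([0.8, 0.1, 0.1])     # initial genotype frequencies
        err = 0.01                            # initial sequencing error rate
        for _ in range(n_iter):
            p_alt = np.array([err, 0.5, 1.0 - err])   # P(alt read | genotype)
            # E-step: per-site posterior over genotypes (binomial likelihoods)
            logl = (alt_counts[:, None] * np.log(p_alt)
                    + ref_counts[:, None] * np.log(1.0 - p_alt)
                    + np.log(freqs))
            post = np.exp(logl - logl.max(axis=1, keepdims=True))
            post /= post.sum(axis=1, keepdims=True)
            # M-step: re-estimate frequencies and the error rate; mismatches
            # are alt reads at RR sites and ref reads at AA sites
            freqs = post.mean(axis=0)
            mismatches = (post[:, 0] * alt_counts + post[:, 2] * ref_counts).sum()
            reads = ((post[:, 0] + post[:, 2]) * (ref_counts + alt_counts)).sum()
            err = mismatches / reads
        return freqs, err, post.argmax(axis=1)

    ref = np.array([12, 6, 0, 11, 5])
    alt = np.array([0, 6, 13, 1, 7])
    print(em_genotype_caller(ref, alt))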
Color extended visual cryptography using error diffusion.
Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu
2011-01-01
Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black-and-white or gray-scale VC schemes; however, they cannot be applied directly to color shares due to different color structures. Some methods for color visual cryptography are unsatisfactory in that they produce either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels, and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
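The error-diffusion half of the scheme (leaving aside VIP synchronization and the per-channel color handling) is standard halftoning; a minimal sketch of classic Floyd-Steinberg error diffusion on one grayscale channel:

    import numpy as np

    def floyd_steinberg(gray):
        """Binarize an image in [0, 1], diffusing each pixel's quantization
        error onto its unvisited neighbors (weights 7/16, 3/16, 5/16, 1/16)."""
        img = gray.astype(float).copy()
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 1.0 if old >= 0.5 else 0.0
                img[y, x] = new
                err = old - new
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h and x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                if y + 1 < h:
                    img[y + 1, x] += err * 5 / 16
                if y + 1 < h and x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
        return img

    halftone = floyd_steinberg(np.linspace(0, 1, 64).reshape(8, 8))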
Skeletal and body composition evaluation
NASA Technical Reports Server (NTRS)
Mazess, R. B.
1983-01-01
Work performed included research on radiation detectors for absorptiometry; analysis of errors affecting single photon absorptiometry and development of instrumentation; analysis of errors affecting dual photon absorptiometry and development of instrumentation; comparison of skeletal measurements with other techniques; cooperation with NASA projects for skeletal evaluation in spaceflight (Experiment MO-78) and in laboratory studies with immobilized animals; studies of postmenopausal osteoporosis; organization of scientific meetings and workshops on absorptiometric measurement; and development of instrumentation for measurement of fluid shifts in the human body. Instrumentation was developed that allows accurate and precise (2% error) measurements of mineral content in compact and trabecular bone and of the total skeleton. Instrumentation was also developed to measure fluid shifts in the extremities. Radiation exposure with these procedures is low (2-10 mrem). One hundred seventy-three technical reports and one hundred four published papers of studies from the University of Wisconsin Bone Mineral Lab are listed.
Optical character recognition: an illustrated guide to the frontier
NASA Astrophysics Data System (ADS)
Nagy, George; Nartker, Thomas A.; Rice, Stephen V.
1999-12-01
We offer a perspective on the performance of current OCR systems by illustrating and explaining actual OCR errors made by three commercial devices. After discussing briefly the character recognition abilities of humans and computers, we present illustrated examples of recognition errors. The top level of our taxonomy of the causes of errors consists of Imaging Defects, Similar Symbols, Punctuation, and Typography. The analysis of a series of 'snippets' from this perspective provides insight into the strengths and weaknesses of current systems, and perhaps a road map to future progress. The examples were drawn from the large-scale tests conducted by the authors at the Information Science Research Institute of the University of Nevada, Las Vegas. By way of conclusion, we point to possible approaches for improving the accuracy of today's systems. The talk is based on our eponymous monograph, recently published in The Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, 1999.
Investigating the Link Between Radiologists' Gaze, Diagnostic Decision, and Image Content
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C
2013-01-01
Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. The radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.
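As a rough illustration of the modeling step, the sketch below trains a random-forest classifier on a merged feature table; the feature layout and the random placeholder data are invented for illustration and are not the study's actual features.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))    # texture + gaze + cognition features
    y = rng.integers(0, 2, size=200)  # 1 = diagnostic error on the region

    model = RandomForestClassifier(n_estimators=300, random_state=0)
    pred = cross_val_predict(model, X, y, cv=5)
    print("errors identified:", (pred[y == 1] == 1).mean())
    print("correct diagnoses confirmed:", (pred[y == 0] == 0).mean())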
Why do adult dogs (Canis familiaris) commit the A-not-B search error?
Sümegi, Zsófia; Kis, Anna; Miklósi, Ádám; Topál, József
2014-02-01
It has been recently reported that adult domestic dogs, like human infants, tend to commit perseverative search errors; that is, they select the previously rewarded empty location in the Piagetian A-not-B search task because of the experimenter's ostensive communicative cues. There is, however, an ongoing debate over whether these findings reveal that dogs can use human ostensive referential communication as a source of information or whether the phenomenon can be accounted for by "more simple" explanations like insufficient attention and learning based on local enhancement. In 2 experiments the authors systematically manipulated the type of human cueing (communicative or noncommunicative) adjacent to the A hiding place during both the A and B trials. Results highlight 3 important aspects of the dogs' A-not-B error: (a) search errors are influenced to a certain extent by dogs' motivation to retrieve the toy object; (b) human communicative and noncommunicative signals have different error-inducing effects; and (c) communicative signals presented at the A hiding place during the B trials but not during the A trials play a crucial role in inducing the A-not-B error, and it can be induced even without demonstrating repeated hiding events at location A. These findings further confirm the notion that perseverative search error, at least partially, reflects a "ready-to-obey" attitude in the dog rather than insufficient attention and/or working memory.
Implicit Value Updating Explains Transitive Inference Performance: The Betasort Model
Jensen, Greg; Muñoz, Fabian; Alkan, Yelda; Ferrera, Vincent P.; Terrace, Herbert S.
2015-01-01
Transitive inference (the ability to infer that B > D given that B > C and C > D) is a widespread characteristic of serial learning, observed in dozens of species. Despite these robust behavioral effects, reinforcement learning models reliant on reward prediction error or associative strength routinely fail to perform these inferences. We propose an algorithm called betasort, inspired by cognitive processes, which performs transitive inference at low computational cost. This is accomplished by (1) representing stimulus positions along a unit span using beta distributions, (2) treating positive and negative feedback asymmetrically, and (3) updating the position of every stimulus during every trial, whether that stimulus was visible or not. Performance was compared for rhesus macaques, humans, and the betasort algorithm, as well as Q-learning, an established reward-prediction error (RPE) model. Of these, only Q-learning failed to respond above chance during critical test trials. Betasort’s success (when compared to RPE models) and its computational efficiency (when compared to full Markov decision process implementations) suggests that the study of reinforcement learning in organisms will be best served by a feature-driven approach to comparing formal models. PMID:26407227
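A loose sketch of the core idea is given below: each stimulus position on the unit span is represented by beta-distribution evidence counts, feedback is handled asymmetrically, and every stimulus is updated on every trial. The update rules and constants are invented simplifications for illustration, not the published betasort algorithm.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5                        # stimuli A..E; index 0 is the highest ranked
    U = np.ones(n)               # "upper" evidence per stimulus
    L = np.ones(n)               # "lower" evidence per stimulus

    def trial(i, j):
        """Present pair (i, j), i truly higher; choose by sampling each
        stimulus's beta distribution and comparing the implied positions."""
        pos = rng.beta(U, L)
        chosen = i if pos[i] > pos[j] else j
        p = U / (U + L)                       # current position estimates
        if chosen == i:                       # correct: consolidate all items,
            U[:] = U + p                      # including ones not shown
            L[:] = L + (1 - p)
        else:                                 # error: shift the pair apart
            U[i] += 1
            L[j] += 1

    adjacent = [(k, k + 1) for k in range(n - 1)]   # training pairs only
    for _ in range(500):
        trial(*adjacent[rng.integers(len(adjacent))])

    print("position estimates:", np.round(U / (U + L), 2))

Transitive probes such as B versus D can then be read off the learned positions even though those pairs were never trained.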
#2 - An Empirical Assessment of Exposure Measurement Error ...
Background: • Differing degrees of exposure error across pollutants. • Previous focus on quantifying and accounting for exposure error in single-pollutant models. • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single and bipollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.
Real-time recognition of feedback error-related potentials during a time-estimation task.
Lopez-Larraz, Eduardo; Iturrate, Iñaki; Montesano, Luis; Minguez, Javier
2010-01-01
Feedback error-related potentials are a promising brain process in the field of rehabilitation since they are related to human learning. Because many therapeutic strategies rely on the presentation of feedback stimuli, the potentials generated by these stimuli could be used to improve the patient's progress. In this paper we propose a method that can identify, in real time, feedback evoked potentials in a time-estimation task. We have tested our system with five participants on two different days separated by three weeks, achieving a mean single-trial detection performance of 71.62% for real-time recognition and 78.08% in offline classification. Additionally, an analysis of the stability of the signal between the two days is performed, suggesting that the feedback responses are stable enough to be used without needing to retrain the user.
Rotational wind indicator enhances control of rotated displays
NASA Technical Reports Server (NTRS)
Cunningham, H. A.; Pavel, Misha
1991-01-01
Rotation by 108 deg of the spatial mapping between a visual display and a manual input device produces large spatial errors in a discrete aiming task. These errors are not easily corrected by voluntary mental effort, but the central nervous system does adapt gradually to the new mapping. Bernotat (1970) showed that adding true hand position to a 90 deg rotated display improved performance of a compensatory tracking task, but tracking error rose again upon removal of the explicit cue. This suggests that the explicit error signal did not induce changes in the neural mapping, but rather allowed the operator to reduce tracking error using a higher mental strategy. In this report, we describe an explicit visual display enhancement applied to a 108 deg rotated discrete aiming task. A 'wind indicator' corresponding to the effect of the mapping rotation is displayed on the operator-controlled cursor. The human operator is instructed to oppose the virtual force represented by the indicator, as one would do when flying an airplane in a crosswind. This enhancement reduces spatial aiming error in the first 10 minutes of practice by an average of 70 percent when compared to a no-enhancement control condition. Moreover, it produces an adaptation aftereffect, which is evidence of learning by neural adaptation rather than by mental strategy. Finally, aiming error does not rise upon removal of the explicit cue.
Human factors engineering approaches to patient identification armband design.
Probst, C Adam; Wolf, Laurie; Bollini, Mara; Xiao, Yan
2016-01-01
The task of patient identification is performed many times each day by nurses and other members of the care team. Armbands are used for both direct verification and barcode scanning during patient identification. Armbands and information layout are critical to reducing patient identification errors and dangerous workarounds. We report the effort at two large, integrated healthcare systems that employed human factors engineering approaches to the information layout design of new patient identification armbands. The different methods used illustrate potential pathways to obtain standardized armbands across healthcare systems that incorporate human factors principles. By extension, how the designs have been adopted provides examples of how to incorporate human factors engineering into key clinical processes. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
HOW DO RADIOLOGISTS USE THE HUMAN SEARCH ENGINE?
Wolfe, Jeremy M; Evans, Karla K; Drew, Trafton; Aizenman, Avigael; Josephs, Emilie
2016-06-01
Radiologists perform many 'visual search tasks' in which they look for one or more instances of one or more types of target item in a medical image (e.g. cancer screening). To understand and improve how radiologists do such tasks, it must be understood how the human 'search engine' works. This article briefly reviews some of the relevant work into this aspect of medical image perception. Questions include: How are attention and the eyes guided in radiologic search? How is global (image-wide) information used in search? How might properties of human vision and human cognition lead to errors in radiologic search? © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
An improved silhouette for human pose estimation
NASA Astrophysics Data System (ADS)
Hawes, Anthony H.; Iftekharuddin, Khan M.
2017-08-01
We propose a novel method for analyzing images that exploits the natural lines of human poses to find areas where self-occlusion could be present. Errors caused by self-occlusion lead several modern human pose estimation methods to misidentify body parts, which reduces the performance of most action recognition algorithms. Our method is motivated by the observation that, in several cases, occlusion can be reasoned about using only the boundary lines of limbs. An intelligent edge detection algorithm based on the above principle could be used to augment the silhouette with information useful for pose estimation algorithms and push forward progress on occlusion handling for human action recognition. The algorithm described is applicable to computer vision scenarios involving 2D images and (appropriately flattened) 3D images.
Software platform for managing the classification of error- related potentials of observers
NASA Astrophysics Data System (ADS)
Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.
2015-09-01
Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbours. The classifier can then be used to classify any EP curve that has been entered into the database.
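The train-and-classify workflow the platform implements (feature selection followed by one of the three classifier families) can be sketched with scikit-learn in a few lines; the feature matrix below is a random placeholder standing in for the per-electrode EP features.

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 60))     # recordings x (features per electrode)
    y = rng.integers(0, 2, size=120)   # correct- vs. error-observation label

    clf = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(f_classif, k=15)),  # feature selection stage
        ("svm", SVC(kernel="rbf")),                # or an ANN / k-NN instead
    ])
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())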
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing the underlying causes of systematic error will have the added benefit of also addressing the underlying causes of random error.
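The benefit of a per-grid correction over a constant domain-wide correction can be demonstrated with a toy simulation. This sketch uses a simple per-grid mean-bias correction, not RAMP's non-linear, non-homoscedastic evaluation; all numbers are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)
    n_grid, n_days = 500, 120
    truth = rng.gamma(5.0, 2.0, size=(n_grid, n_days))    # "observed" PM2.5
    bias = rng.normal(3.0, 1.5, size=(n_grid, 1))         # varies by grid cell
    cmaq = truth + bias + rng.normal(0.0, 2.0, size=truth.shape)

    domain_corr = cmaq - (cmaq - truth).mean()                     # one offset
    grid_corr = cmaq - (cmaq - truth).mean(axis=1, keepdims=True)  # per cell

    mse = lambda est: float(((est - truth) ** 2).mean())
    print(mse(cmaq), mse(domain_corr), mse(grid_corr))   # per-cell wins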
Absolute judgment for one- and two-dimensional stimuli embedded in Gaussian noise
NASA Technical Reports Server (NTRS)
Kvalseth, T. O.
1977-01-01
This study examines the effect on human performance of adding Gaussian noise or disturbance to the stimuli in absolute judgment tasks involving both one- and two-dimensional stimuli. For each selected stimulus value (both an X-value and a Y-value were generated in the two-dimensional case), 10 values (or 10 pairs of values in the two-dimensional case) were generated from a zero-mean Gaussian variate, added to the selected stimulus value and then served as the coordinate values for the 10 points that were displayed sequentially on a CRT. The results show that human performance, in terms of the information transmitted and rms error as functions of stimulus uncertainty, was significantly reduced as the noise variance increased.
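Information transmitted in an absolute-judgment task is the mutual information of the stimulus-response confusion matrix; a short sketch of the computation (the counts below are made up):

    import numpy as np

    def transmitted_information(confusion):
        """Bits per trial from a stimulus-by-response count matrix."""
        p = confusion / confusion.sum()
        ps = p.sum(axis=1, keepdims=True)    # stimulus marginal
        pr = p.sum(axis=0, keepdims=True)    # response marginal
        nz = p > 0
        return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

    # Added noise smears responses across neighboring stimulus categories,
    # lowering the transmitted information below log2(3) bits.
    conf = np.array([[8, 2, 0],
                     [2, 6, 2],
                     [0, 2, 8]])
    print(round(transmitted_information(conf), 3), "bits/trial")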
Williams, Camille K.; Tremblay, Luc; Carnahan, Heather
2016-01-01
Researchers in the domain of haptic training are now entering the long-standing debate regarding whether or not it is best to learn a skill by experiencing errors. Haptic training paradigms provide fertile ground for exploring how various theories about feedback, errors and physical guidance intersect during motor learning. Our objective was to determine how error minimizing, error augmenting and no haptic feedback while learning a self-paced curve-tracing task impact performance on delayed (1 day) retention and transfer tests, which indicate learning. We assessed performance using movement time and tracing error to calculate a measure of overall performance – the speed accuracy cost function. Our results showed that despite exhibiting the worst performance during skill acquisition, the error augmentation group had significantly better accuracy (but not overall performance) than the error minimization group on delayed retention and transfer tests. The control group’s performance fell between that of the two experimental groups but was not significantly different from either on the delayed retention test. We propose that the nature of the task (requiring online feedback to guide performance) coupled with the error augmentation group’s frequent off-target experience and rich experience of error-correction promoted information processing related to error-detection and error-correction that are essential for motor learning. PMID:28082937
Signed reward prediction errors drive declarative learning
De Loof, Esther; Ergo, Kate; Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom
2018-01-01
Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning–a quintessentially human form of learning–remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; “better-than-expected” signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli. PMID:29293493
Karimi, D; Mondor, T A; Mann, D D
2008-01-01
The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve the performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers conveyed the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for the auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.
Discovering body site and severity modifiers in clinical texts
Dligach, Dmitriy; Bethard, Steven; Becker, Lee; Miller, Timothy; Savova, Guergana K
2014-01-01
Objective To research computational methods for discovering body site and severity modifiers in clinical texts. Methods We cast the task of discovering body site and severity modifiers as a relation extraction problem in the context of a supervised machine learning framework. We utilize rich linguistic features to represent the pairs of relation arguments and delegate the decision about the nature of the relationship between them to a support vector machine model. We evaluate our models using two corpora that annotate body site and severity modifiers. We also compare the model performance to a number of rule-based baselines. We conduct cross-domain portability experiments. In addition, we carry out feature ablation experiments to determine the contribution of various feature groups. Finally, we perform error analysis and report the sources of errors. Results The performance of our method for discovering body site modifiers achieves F1 of 0.740–0.908 and our method for discovering severity modifiers achieves F1 of 0.905–0.929. Discussion Results indicate that both methods perform well on both in-domain and out-domain data, approaching the performance of human annotators. The most salient features are token and named entity features, although syntactic dependency features also contribute to the overall performance. The dominant sources of errors are infrequent patterns in the data and inability of the system to discern deeper semantic structures. Conclusions We investigated computational methods for discovering body site and severity modifiers in clinical texts. Our best system is released open source as part of the clinical Text Analysis and Knowledge Extraction System (cTAKES). PMID:24091648
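A skeletal version of the described setup, casting modifier discovery as supervised relation classification over feature dictionaries, might look as follows; the feature names and toy examples are invented, not cTAKES's actual feature set.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # One dict per candidate (entity mention, body-site mention) pair
    train = [
        ({"arg1": "lesion", "arg2": "liver", "between": "in_the", "dist": 2}, 1),
        ({"arg1": "pain", "arg2": "chest", "between": "radiating_to", "dist": 3}, 1),
        ({"arg1": "lesion", "arg2": "mild", "between": "was", "dist": 4}, 0),
        ({"arg1": "pain", "arg2": "knee", "between": "denies_any", "dist": 5}, 0),
    ]
    X, y = zip(*train)
    model = make_pipeline(DictVectorizer(), LinearSVC())
    model.fit(list(X), list(y))
    test = {"arg1": "mass", "arg2": "lung", "between": "in_the", "dist": 2}
    print(model.predict([test]))    # 1 = related, 0 = unrelated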
Local-search based prediction of medical image registration error
NASA Astrophysics Data System (ADS)
Saygili, Görkem
2018-03-01
Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start with extracting image-based and deformation-based features, then apply feature pooling, and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after applying a single registration but provide limited accuracy, whereas deformation-based features such as the variation of the deformation vector field may require up to 20 registrations, which is a considerably time-consuming task. This paper proposes to use features extracted from a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm to find corresponding voxels between registered image pairs and, based on the amount of shift and stereo confidence measures, it densely predicts the amount of registration error in millimetres using a RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphical Processing Unit (GPU), and can still provide highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.
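The regression step can be sketched as follows, with invented per-voxel features (shift magnitude from the local search plus two stereo-style confidence measures) standing in for the paper's feature set:

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    shift = rng.exponential(2.0, size=5000)        # best-match shift (mm)
    conf1 = rng.uniform(0.0, 1.0, size=5000)       # matching confidence
    conf2 = rng.uniform(0.0, 1.0, size=5000)
    # Synthetic ground-truth error loosely tied to the features
    true_err = shift * (1.2 - 0.5 * conf1) + rng.normal(0.0, 0.3, 5000)

    X = np.column_stack([shift, conf1, conf2])
    reg = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, true_err)
    print("predicted registration error (mm):", np.round(reg.predict(X[:3]), 2))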
Drenjanac, Domagoj; Tomic, Slobodanka; Agüera, Juan; Perez-Ruiz, Manuel
2014-10-22
In the new agricultural scenarios, the interaction between autonomous tractors and a human operator is important when they jointly perform a task. Obtaining and exchanging accurate localization information between autonomous tractors and the human operator, working as a team, is critical to maintaining safety, synchronization, and efficiency during the execution of a mission. An advanced localization system for both entities involved in the joint work, i.e., the autonomous tractors and the human operator, provides a basis for meeting the task requirements. In this paper, different localization techniques for a human operator and an autonomous tractor in a field environment were tested. First, we compared the localization performance of two global navigation satellite system (GNSS) receivers carried by the human operator: (1) an internal GNSS receiver built into a handheld device; and (2) an external DGNSS receiver with centimeter-level accuracy. To investigate autonomous tractor localization, a real-time kinematic (RTK)-based localization system installed on an autonomous tractor developed for agricultural applications was evaluated. Finally, a hybrid localization approach, which combines distance estimates obtained using a wireless scheme with the position of an autonomous tractor obtained using an RTK-GNSS system, is proposed. The hybrid solution is intended for user localization in unstructured environments in which the GNSS signal is obstructed. The hybrid localization approach has two components: (1) a localization algorithm based on the received signal strength indication (RSSI) from the wireless environment; and (2) the acquisition of the tractor RTK coordinates when the human operator is near the tractor. In five RSSI tests, the best result achieved was an average localization error of 4 m. In tests of real-time position correction between rows, an RMS error of 2.4 cm demonstrated that the passes were straight, as was desired for the autonomous tractor. From these preliminary results, future work will address the use of autonomous tractor localization in the hybrid localization approach. PMID:25340450
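The RSSI component typically converts received signal strength to range with a log-distance path-loss model and then solves for position; a minimal sketch under assumed propagation constants:

    import numpy as np
    from scipy.optimize import least_squares

    def rssi_to_distance(rssi, rssi_1m=-45.0, n=2.5):
        """Log-distance path-loss model; rssi_1m and exponent n are assumed."""
        return 10 ** ((rssi_1m - rssi) / (10 * n))

    anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0]])  # node positions (m)
    rssi = np.array([-62.0, -70.0, -55.0])                      # measured dBm
    d = rssi_to_distance(rssi)

    # Position whose distances to the anchors best match the range estimates
    fit = least_squares(lambda p: np.linalg.norm(anchors - p, axis=1) - d,
                        x0=[10.0, 10.0])
    print("estimated operator position (m):", np.round(fit.x, 1))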
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.
Managing human error in aviation.
Helmreich, R L
1997-05-01
Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.
Empirical Analysis of Systematic Communication Errors.
1981-09-01
...human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links.) ...phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission... communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share...
Identifying Human Factors Issues in Aircraft Maintenance Operations
NASA Technical Reports Server (NTRS)
Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)
1995-01-01
Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986-1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g., wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not) as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding what variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factor issues involved.
Managing Errors to Reduce Accidents in High Consequence Networked Information Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganter, J.H.
1999-02-01
Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.
Sarter, Nadine
2008-06-01
The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.
The Swiss cheese model of adverse event occurrence--Closing the holes.
Stein, James E; Heiss, Kurt
2015-12-01
Traditional surgical attitudes regarding error and complications have focused on individual failings. Human factors research has brought new and significant insights into the occurrence of error in healthcare, helping us identify systemic problems that injure patients while enhancing individual accountability and teamwork. This article introduces human factors science and its applicability to teamwork, surgical culture, medical error, and individual accountability. Copyright © 2015 Elsevier Inc. All rights reserved.
Behind Human Error: Cognitive Systems, Computers and Hindsight
1994-12-01
CSERIAC is a Department of Defense Information Analysis Center sponsored by the Defense... Contents include: Neutral Observer Criteria; Error Analysis as Causal Judgment; Error as Information; A Fundamental Surprise; What is Human... (Kahnemann, 1974), and in risk analysis (Dougherty and Fragola, 1990). The discussions have continued in a wide variety of forums, including the...
Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R
2014-12-01
Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
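The simulation logic can be sketched briefly: draw scores from a pmf describing each expert behavior and compare the spread of the resulting distributions. The pmfs below are illustrative assumptions, not the elicited ones.

    import numpy as np

    rng = np.random.default_rng(0)
    scores = np.arange(1, 6)              # 1-5 likelihood/severity scale

    pmfs = {                              # assumed behaviors around a score of 3
        "confident":  [0.00, 0.05, 0.90, 0.05, 0.00],
        "doubting":   [0.05, 0.20, 0.50, 0.20, 0.05],
        "irresolute": [0.15, 0.20, 0.30, 0.20, 0.15],
    }

    for name, pmf in pmfs.items():
        sims = rng.choice(scores, size=100_000, p=pmf)
        # Standard deviation serves as the robustness measure for the score
        print(f"{name:>10}: mean={sims.mean():.2f}  sd={sims.std():.2f}")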
Understanding and responding when things go wrong: key principles for primary care educators.
McNab, Duncan; Bowie, Paul; Ross, Alastair; Morrison, Jill
2016-07-01
Learning from events with unwanted outcomes is an important part of workplace based education and providing evidence for medical appraisal and revalidation. It has been suggested that adopting a 'systems approach' could enhance learning and effective change. We believe the following key principles should be understood by all healthcare staff, especially those with a role in developing and delivering educational content for safety and improvement in primary care. When things go wrong, professional accountability involves accepting there has been a problem, apologising if necessary and committing to learn and change. This is easier in a 'Just Culture' where wilful disregard of safe practice is not tolerated but where decisions commensurate with training and experience do not result in blame and punishment. People usually attempt to achieve successful outcomes, but when things go wrong the contribution of hindsight and attribution bias, as well as a lack of understanding of conditions and available information (local rationality), can lead to inappropriately blaming 'human error'. System complexity makes reduction into component parts difficult; thus attempting to 'find-and-fix' malfunctioning components may not always be a valid approach. Finally, performance variability by staff is often needed to meet demands or cope with resource constraints. We believe understanding these core principles is a necessary precursor to adopting a 'systems approach' that can increase learning and reduce the damaging effects on morale when 'human error' is blamed. This may result in 'human error' becoming the starting point of an investigation and not the endpoint.
Wear, Keith A
2013-04-01
The presence of two longitudinal waves in poroelastic media is predicted by Biot's theory and has been confirmed experimentally in through-transmission measurements in cancellous bone. Estimation of attenuation coefficients and velocities of the two waves is challenging when the two waves overlap in time. The modified least squares Prony's (MLSP) method in conjunction with curve-fitting (MLSP + CF) is tested using simulations based on published values for fast and slow wave attenuation coefficients and velocities in cancellous bone from several studies in bovine femur, human femur, and human calcaneus. The search algorithm is accelerated by exploiting correlations among search parameters. The performance of the algorithm is evaluated as a function of signal-to-noise ratio (SNR). For a typical experimental SNR (40 dB), the root-mean-square errors (RMSEs) for one example (human femur) with fast and slow waves separated by approximately half of a pulse duration were 1 m/s (slow wave velocity), 4 m/s (fast wave velocity), 0.4 dB/cm MHz (slow wave attenuation slope), and 1.7 dB/cm MHz (fast wave attenuation slope). The MLSP + CF method is fast (requiring less than 2 s at SNR = 40 dB on a consumer-grade notebook computer) and is flexible with respect to the functional form of the parametric model for the transmission coefficient. The MLSP + CF method provides sufficient accuracy and precision for many applications such that experimental error is a greater limiting factor than estimation error.
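Separating two overlapping pulses can be illustrated with plain nonlinear least squares (not the MLSP method itself); here the fast and slow waves are stand-in Gaussian-enveloped sinusoids with assumed amplitudes and arrival times.

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 30.0, 600)       # time axis in microseconds

    def pulse(t, amp, t0, f0=0.5, sigma=2.0):
        """One longitudinal wave: Gaussian envelope, center frequency f0 in MHz."""
        return (amp * np.exp(-((t - t0) / sigma) ** 2)
                * np.cos(2 * np.pi * f0 * (t - t0)))

    def two_waves(t, a_fast, t_fast, a_slow, t_slow):
        return pulse(t, a_fast, t_fast) + pulse(t, a_slow, t_slow)

    # Simulated received signal: overlapping fast and slow waves plus noise
    rng = np.random.default_rng(0)
    y = two_waves(t, 0.4, 8.0, 1.0, 10.0) + rng.normal(0.0, 0.02, t.size)

    popt, _ = curve_fit(two_waves, t, y, p0=[0.5, 7.0, 0.8, 11.0])
    print("(amp, arrival) fast then slow:", np.round(popt, 3))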
Baek, Jihye; Huh, Jangyoung; Kim, Myungsoo; Hyun An, So; Oh, Yoonjin; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena
2013-02-01
To evaluate the accuracy of measuring volumes using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Phantoms, consisting of water, contrast agent, and agarose, were manufactured. The volume was measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using metric values and fusion images. Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with 4.4 ± 3.0% for CT and 3.1 ± 2.0% for MR. The results imply that volume measurement using 3D US devices has a similar accuracy level to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. 3D US could be used in the volume measurement of human bladders and prostates. CT-3D US image fusion could be used in monitoring the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.
Petermeijer, Sebastiaan M; Abbink, David A; de Winter, Joost C F
2015-02-01
The aim of this study was to compare continuous versus bandwidth haptic steering guidance in terms of lane-keeping behavior, aftereffects, and satisfaction. An important human factors question is whether operators should be supported continuously or only when tolerance limits are exceeded. We aimed to clarify this issue for haptic steering guidance by investigating costs and benefits of both approaches in a driving simulator. Thirty-two participants drove five trials, each with a different level of haptic support: no guidance (Manual); guidance outside a 0.5-m bandwidth (Band1); a hysteresis version of Band1, which guided back to the lane center once triggered (Band2); continuous guidance (Cont); and Cont with double feedback gain (ContS). Participants performed a reaction time task while driving. Toward the end of each trial, the guidance was unexpectedly disabled to investigate aftereffects. All four guidance systems prevented large lateral errors (>0.7 m). Cont and especially ContS yielded smaller lateral errors and higher time to line crossing than Manual, Band1, and Band2. Cont and ContS yielded short-lasting aftereffects, whereas Band1 and Band2 did not. Cont yielded higher self-reported satisfaction and faster reaction times than Band1. Continuous and bandwidth guidance both prevent large driver errors. Continuous guidance yields improved performance and satisfaction over bandwidth guidance at the cost of aftereffects and variability in driver torque (indicating human-automation conflicts). The presented results are useful for designers of haptic guidance systems and support critical thinking about the costs and benefits of automation support systems.
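The abstract does not give the control laws, but the qualitative distinction between the conditions can be sketched as simple torque functions of lateral error, with an assumed gain and release threshold:

    def guidance_torque(err_m, mode, k=2.0, band=0.5, engaged=False):
        """Illustrative steering-guidance torque laws; err_m is lateral error
        in meters, positive to the right. Returns (torque, engaged-state)."""
        if mode == "Cont":                       # continuous guidance
            return -k * err_m, engaged
        if mode == "ContS":                      # double feedback gain
            return -2.0 * k * err_m, engaged
        if mode == "Band1":                      # active only outside 0.5 m
            if abs(err_m) <= band:
                return 0.0, engaged
            edge = band if err_m > 0 else -band
            return -k * (err_m - edge), engaged
        if mode == "Band2":                      # hysteresis: once triggered,
            engaged = engaged or abs(err_m) > band   # guide back to center
            if engaged and abs(err_m) < 0.05:
                engaged = False                  # released near lane center
            return (-k * err_m if engaged else 0.0), engaged
        return 0.0, engaged                      # "Manual"

    print(guidance_torque(0.6, "Band1"))
    print(guidance_torque(0.6, "Band2"))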
On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification
Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.
2014-01-01
Purpose To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods The proton resonance frequency of water, unlike triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantifications in phantom and ex vivo acquisitions. PMID:24123362
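The temperature dependence enters through the water peak only; below is a sketch of a temperature-corrected multi-peak signal model, using the commonly cited water shift of about -0.01 ppm/°C and typical six-peak fat spectrum values (both are assumptions, not the paper's exact numbers).

    import numpy as np

    GAMMA_MHZ_PER_T = 42.577      # proton gyromagnetic ratio
    PPM_PER_DEG_C = -0.01         # approximate water PRF thermal coefficient

    def cse_signal(te_s, fat_frac, b0=1.5, temp_c=37.0,
                   fat_ppm=(-3.80, -3.40, -2.60, -1.94, -0.39, 0.60),
                   fat_amp=(0.087, 0.693, 0.128, 0.004, 0.039, 0.048)):
        """Complex CSE fat-water signal with a temperature-shifted water peak;
        fat peak offsets are relative to water at the reference temperature."""
        hz_per_ppm = GAMMA_MHZ_PER_T * b0                  # at this field
        water_hz = PPM_PER_DEG_C * (temp_c - 37.0) * hz_per_ppm
        fat_hz = np.array(fat_ppm) * hz_per_ppm
        water = (1.0 - fat_frac) * np.exp(2j * np.pi * water_hz * te_s)
        fat = fat_frac * sum(a * np.exp(2j * np.pi * f * te_s)
                             for a, f in zip(fat_amp, fat_hz))
        return water + fat

    te = np.array([1.3e-3, 3.5e-3, 5.7e-3])    # echo times in seconds
    print(np.abs(cse_signal(te, fat_frac=0.3, temp_c=20.0)))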
Correction of Refractive Errors in Rhesus Macaques (Macaca mulatta) Involved in Visual Research
Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias
2014-01-01
Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals. PMID:25427343
Correction of refractive errors in rhesus macaques (Macaca mulatta) involved in visual research.
Mitchell, Jude F; Boisvert, Chantal J; Reuter, Jon D; Reynolds, John H; Leblanc, Mathias
2014-08-01
Macaques are the most common animal model for studies in vision research, and due to their high value as research subjects, often continue to participate in studies well into old age. As is true in humans, visual acuity in macaques is susceptible to refractive errors. Here we report a case study in which an aged macaque demonstrated clear impairment in visual acuity according to performance on a demanding behavioral task. Refraction demonstrated bilateral myopia that significantly affected behavioral and visual tasks. Using corrective lenses, we were able to restore visual acuity. After correction of myopia, the macaque's performance on behavioral tasks was comparable to that of a healthy control. We screened 20 other male macaques to assess the incidence of refractive errors and ocular pathologies in a larger population. Hyperopia was the most frequent ametropia but was mild in all cases. A second macaque had mild myopia and astigmatism in one eye. There were no other pathologies observed on ocular examination. We developed a simple behavioral task that visual research laboratories could use to test visual acuity in macaques. The test was reliable and easily learned by the animals in 1 d. This case study stresses the importance of screening macaques involved in visual science for refractive errors and ocular pathologies to ensure the quality of research; we also provide simple methodology for screening visual acuity in these animals.
Comparison of Highly Resolved Model-Based Exposure ...
Human exposure to air pollution in many studies is represented by ambient concentrations from space-time kriging of observed values. Space-time kriging techniques based on a limited number of ambient monitors may fail to capture the concentration from local sources. Further, because people spend most of their time indoors, using ambient concentration to represent exposure may cause error. To quantify the associated exposure error, we computed a series of six different hourly-based exposure metrics at 16,095 Census blocks of three counties in North Carolina for CO, NOx, PM2.5, and elemental carbon (EC) during 2012. These metrics include ambient background concentration from space-time ordinary kriging (STOK), ambient on-road concentration from the Research LINE source dispersion model (R-LINE), a hybrid concentration combining STOK and R-LINE, and their associated indoor concentrations from an indoor infiltration mass balance model. Using the hybrid-based indoor concentration as the standard, the comparison showed that outdoor STOK metrics yielded large errors at both the population level (67% to 93%) and the individual level (average bias between −10% and 95%). For pollutants with significant contributions from on-road emission (EC and NOx), the on-road-based indoor metric performs best at the population level (error less than 52%). At the individual level, however, the STOK-based indoor concentration performs best (average bias below 30%). For PM2.5, due to the relatively low co
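The indoor metrics in this abstract come from an indoor infiltration mass balance model; a minimal steady-state sketch of the usual form is below, with generic penetration (P), air exchange (a), and deposition (k) parameters that are illustrative assumptions, not values from the study.

```python
def indoor_concentration(c_out, penetration=0.8, air_exchange_per_h=0.5,
                         deposition_per_h=0.2):
    """Steady-state indoor concentration from an infiltration mass balance:
    C_in = C_out * P * a / (a + k). The infiltration factor P*a/(a+k) is
    the fraction of ambient pollution that reaches the indoor environment."""
    f_inf = penetration * air_exchange_per_h / (air_exchange_per_h +
                                                deposition_per_h)
    return f_inf * c_out

# e.g. ambient PM2.5 of 12 ug/m3 -> indoor contribution of ambient origin
print(indoor_concentration(12.0))   # ~6.9 ug/m3 with these illustrative rates
```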
Human-computer interaction in multitask situations
NASA Technical Reports Server (NTRS)
Rouse, W. B.
1977-01-01
Human-computer interaction in multitask decisionmaking situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.
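A toy discrete-event sketch of the kind of queueing model described above: action-evoking events arrive as a Poisson stream and are allocated to whichever of the human or the computer can finish first. The service times and error probabilities are invented for illustration and do not reproduce Rouse's model.

```python
import random

def simulate(n_events=10000, mean_interarrival=1.0, human_time=0.8,
             computer_time=0.2, p_err_human=0.01, p_err_computer=0.05, seed=1):
    """Toy queueing simulation of shared human/computer responsibility:
    each action-evoking event goes to whichever server frees up first."""
    random.seed(seed)
    t, free_h, free_c, errors = 0.0, 0.0, 0.0, 0
    for _ in range(n_events):
        t += random.expovariate(1.0 / mean_interarrival)   # Poisson arrivals
        if max(t, free_c) + computer_time <= max(t, free_h) + human_time:
            free_c = max(t, free_c) + computer_time        # computer takes it
            errors += random.random() < p_err_computer
        else:
            free_h = max(t, free_h) + human_time           # human takes it
            errors += random.random() < p_err_human
    return errors / n_events

print(f"overall error rate: {simulate():.4f}")
```

Varying the arrival rate, speed mismatch, or error probabilities in such a simulation is one way to explore the system variables the abstract lists.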
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Dynamic Neural Correlates of Motor Error Monitoring and Adaptation during Trial-to-Trial Learning
Tan, Huiling; Jenkinson, Ned
2014-01-01
A basic EEG feature upon voluntary movements in healthy human subjects is a β (13–30 Hz) band desynchronization followed by a postmovement event-related synchronization (ERS) over contralateral sensorimotor cortex. The functional implications of these changes remain unclear. We hypothesized that, because β ERS follows movement, it may reflect the degree of error in that movement, and the salience of that error to the task at hand. As such, the signal might underpin trial-to-trial modifications of the internal model that informs future movements. To test this hypothesis, EEG was recorded in healthy subjects while they moved a joystick-controlled cursor to visual targets on a computer screen, with different rotational perturbations applied between the joystick and cursor. We observed consistently lower β ERS in trials with large error, even when other possible motor confounds, such as reaction time, movement duration, and path length, were controlled, regardless of whether the perturbation was random or constant. There was a negative trial-to-trial correlation between the size of the absolute initial angular error and the amplitude of the β ERS, and this negative correlation was enhanced when other contextual information about the behavioral salience of the angular error, namely, the bias and variance of errors in previous trials, was additionally considered. These same features also had an impact on the behavioral performance. The findings suggest that the β ERS reflects neural processes that evaluate motor error and do so in the context of the prior history of errors. PMID:24741058
Kis, Anna; Gácsi, Márta; Range, Friederike; Virányi, Zsófia
2012-01-01
In this paper, we describe a behaviour pattern similar to the "A-not-B" error found in human infants and young apes in a monkey species, the common marmosets (Callithrix jacchus). In contrast to the classical explanation, recently it has been suggested that the "A-not-B" error committed by human infants is at least partially due to misinterpretation of the hider's ostensively communicated object hiding actions as potential 'teaching' demonstrations during the A trials. We tested whether this so-called Natural Pedagogy hypothesis would account for the A-not-B error that marmosets commit in a standard object permanence task, but found no support for the hypothesis in this species. Alternatively, we present evidence that lower level mechanisms, such as attention and motivation, play an important role in committing the "A-not-B" error in marmosets. We argue that these simple mechanisms might contribute to the effect of undeveloped object representational skills in other species including young non-human primates that commit the A-not-B error.
A Hybrid Model for Predicting the Prevalence of Schistosomiasis in Humans of Qianjiang City, China
Wang, Ying; Lu, Zhouqin; Tian, Lihong; Tan, Li; Shi, Yun; Nie, Shaofa; Liu, Li
2014-01-01
Backgrounds/Objective Schistosomiasis is still a major public health problem in China, despite the fact that the government has implemented a series of strategies to prevent and control the spread of the parasitic disease. Advance warning and reliable forecasting can help policymakers adjust and implement strategies more effectively, which will lead to the control and elimination of schistosomiasis. Our aim is to explore the application of a hybrid forecasting model to track the trends of the prevalence of schistosomiasis in humans, which provides a methodological basis for predicting and detecting schistosomiasis infection in endemic areas. Methods A hybrid approach combining the autoregressive integrated moving average (ARIMA) model and the nonlinear autoregressive neural network (NARNN) model was used to forecast the prevalence of schistosomiasis over the next four years. Forecasting performance was compared between the hybrid ARIMA-NARNN model and the single ARIMA or single NARNN model. Results The modelling mean square error (MSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) of the ARIMA-NARNN model were 0.1869×10⁻⁴, 0.0029 and 0.0419, with corresponding testing errors of 0.9375×10⁻⁴, 0.0081 and 0.9064, respectively. These error values generated with the hybrid model were all lower than those obtained from the single ARIMA or NARNN model. The forecast values were 0.75%, 0.80%, 0.76% and 0.77% for the next four years, indicating no downward trend. Conclusion The hybrid model achieves high prediction accuracy for the prevalence of schistosomiasis, which provides a methodological basis for future schistosomiasis monitoring and control strategies in the study area. It is worth attempting to utilize the hybrid detection scheme in other schistosomiasis-endemic areas and for other infectious diseases. PMID:25119882
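A minimal sketch of the hybrid idea, assuming the common construction in which ARIMA captures the linear structure and a small autoregressive neural network (here an MLP standing in for the NARNN) is fit to the ARIMA residuals; the orders, lag count, and toy series are illustrative only.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

def hybrid_forecast(y, order=(1, 1, 1), lags=4):
    """Fit ARIMA to the series, then a small autoregressive neural net to
    the ARIMA residuals; the hybrid fit is the sum of the two parts."""
    arima = ARIMA(y, order=order).fit()
    resid = arima.resid
    # Lagged residual matrix for the nonlinear autoregression on residuals.
    X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
    t = resid[lags:]
    nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                      random_state=0).fit(X, t)
    fitted = arima.fittedvalues[lags:] + nn.predict(X)
    mse = np.mean((y[lags:] - fitted) ** 2)
    return fitted, mse

y = np.cumsum(np.random.default_rng(0).normal(0.0, 0.1, 60)) + 1.0  # toy data
fitted, mse = hybrid_forecast(y)
print(f"in-sample MSE: {mse:.4f}")
```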
Simplification of the kinematic model of human movement
NASA Astrophysics Data System (ADS)
Dusza, Jacek J.; Wawrzyniak, Zbigniew M.; del Prado Martinez, David
2013-10-01
The paper presents a method for simplifying the human gait model. The experimental data were obtained in the laboratory of the SATI group in the Electronics Engineering Department of the University of Valencia. As a result of the Mean Double Step (MDS) procedure, human motion was described by a matrix containing the Cartesian coordinates of 26 markers placed on the human body, recorded at 100 time points. With these data it was possible to develop a software application that performs a wide variety of tasks, such as array simplification, mask calculation for the simplification, and error calculation, as well as tools for signal comparison and animation of marker movement. Simplifications were made by spectral analysis of the signals and by calculating the standard deviation of the differences between each signal and its approximation. Using this method, the displacement signals can be written as time series limited to a small number of harmonic components, which allows a high degree of data compression. The model presented in this work can be applied in the context of medical diagnostics or rehabilitation, because for a given approximation error, a large number of required harmonics may indicate abnormalities (orthopaedic symptoms) in the gait cycle.
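A short sketch of the harmonic-truncation step described above: keep the first few Fourier components of one averaged gait cycle and report the standard deviation of the residual. The 100-sample signal and its harmonic content are invented for illustration.

```python
import numpy as np

def truncate_harmonics(signal, n_harmonics):
    """Approximate a periodic gait signal (one mean double step, e.g. 100
    samples) by keeping only its first n_harmonics Fourier components."""
    spec = np.fft.rfft(signal)
    spec[n_harmonics + 1:] = 0.0          # keep DC + n_harmonics terms
    approx = np.fft.irfft(spec, n=len(signal))
    err = np.std(signal - approx)         # standard deviation of the residual
    return approx, err

t = np.linspace(0.0, 1.0, 100, endpoint=False)
marker_x = (np.sin(2*np.pi*t) + 0.2*np.sin(2*np.pi*3*t)
            + 0.05*np.sin(2*np.pi*7*t))  # toy marker displacement signal
for k in (1, 3, 7):
    _, e = truncate_harmonics(marker_x, k)
    print(f"{k} harmonics -> residual std {e:.4f}")
```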
Human factors and ergonomics as a patient safety practice
Carayon, Pascale; Xie, Anping; Kianfar, Sarah
2014-01-01
Background Human factors and ergonomics (HFE) approaches to patient safety have addressed five different domains: usability of technology; human error and its role in patient safety; the role of healthcare worker performance in patient safety; system resilience; and HFE systems approaches to patient safety. Methods A review of various HFE approaches to patient safety and studies on HFE interventions was conducted. Results This paper describes specific examples of HFE-based interventions for patient safety. Studies show that HFE can be used in a variety of domains. Conclusions HFE is a core element of patient safety improvement. Therefore, every effort should be made to support HFE applications in patient safety. PMID:23813211
DOE Office of Scientific and Technical Information (OSTI.GOV)
W. J. Galyean; A. M. Whaley; D. L. Kelly
This guide provides step-by-step guidance on the use of the SPAR-H method for quantifying Human Failure Events (HFEs). This guide is intended to be used with the worksheets provided in 'The SPAR-H Human Reliability Analysis Method,' NUREG/CR-6883, dated August 2005. Each step in the process of producing a Human Error Probability (HEP) is discussed: Step 1, categorize the HFE as diagnosis and/or action; Step 2, rate the performance shaping factors; Step 3, calculate the PSF-modified HEP; Step 4, account for dependence; and Step 5, apply the minimum value cutoff. The discussions on dependence are extensive and include an appendix that describes insights obtained from the psychology literature.
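A minimal sketch of the SPAR-H arithmetic (Steps 1-3 and 5) as documented in NUREG/CR-6883: nominal HEPs of 1.0E-2 (diagnosis) and 1.0E-3 (action) are multiplied by the eight PSF multipliers, with the adjustment formula applied when three or more PSFs are negative. Dependence (Step 4) is omitted, and the example multipliers are hypothetical.

```python
def spar_h_hep(task_type, psf_multipliers):
    """SPAR-H style HEP: nominal HEP times the composite PSF multiplier,
    with the adjustment formula applied when >= 3 PSFs are negative
    (multiplier > 1), as prescribed in NUREG/CR-6883."""
    nhep = {"diagnosis": 1.0e-2, "action": 1.0e-3}[task_type]
    composite = 1.0
    negative = 0
    for m in psf_multipliers:        # eight PSFs: time, stress, complexity, ...
        composite *= m
        negative += m > 1.0
    if negative >= 3:                # adjustment keeps HEP <= 1.0
        return nhep * composite / (nhep * (composite - 1.0) + 1.0)
    return min(1.0, nhep * composite)

# e.g. a diagnosis HFE with high stress (x2), high complexity (x2),
# poor ergonomics (x10), and all other PSFs nominal (x1)
print(spar_h_hep("diagnosis", [1, 2, 2, 1, 10, 1, 1, 1]))
```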
Probabilistic simulation of the human factor in structural reliability
NASA Technical Reports Server (NTRS)
Chamis, C. C.
1993-01-01
A formal approach is described in an attempt to computationally simulate the probable ranges of uncertainty of the human factor in structural probabilistic assessments. A multi-factor interaction equation (MFIE) model has been adopted for this purpose. Human factors such as marital status, professional status, home life, job satisfaction, work load and health are considered to demonstrate the concept. Parametric studies in conjunction with judgment are used to select reasonable values for the participating factors (primitive variables). Probabilistic sensitivity studies are subsequently performed to assess the suitability of the MFIE and the validity of the whole approach. Results obtained show that the uncertainties for the no-error case range from five to thirty percent for the most optimistic case.
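A hedged sketch of one commonly cited MFIE form, a product of normalized factor terms raised to exponents; the exact functional form and all factor values below are assumptions for illustration, not taken from the paper.

```python
def mfie(factors):
    """Multi-factor interaction equation (one common form):
    effect = product over factors of ((a_f - a) / (a_f - a_0)) ** n,
    where a is the current value of a primitive variable, a_0 its
    reference value, a_f its extreme value, and n an exponent."""
    effect = 1.0
    for a, a0, af, n in factors:
        effect *= ((af - a) / (af - a0)) ** n
    return effect

# Hypothetical human-factor primitives on 0-10 scales:
# (current, reference, extreme, exponent)
factors = [(6.0, 2.0, 10.0, 0.5),   # work load
           (4.0, 8.0, 0.0, 0.5)]    # job satisfaction (higher is better)
print(f"relative performance factor: {mfie(factors):.3f}")
```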
Simulation of the human-telerobot interface on the Space Station
NASA Technical Reports Server (NTRS)
Stuart, Mark A.; Smith, Randy L.
1993-01-01
Many issues remain unresolved concerning the components of the human-telerobot interface presented in this work. It is critical that these components be optimally designed and arranged to ensure not only that the overall system's goals are met, but also that the intended end-user has been optimally accommodated. With sufficient testing and evaluation throughout the development cycle, the selection of the components to use in the final telerobotic system can promote efficient, error-free performance. It is recommended that whole-system simulation with full-scale mockups be used to help design the human-telerobot interface. It is contended that the use of simulation can facilitate this design and evaluation process.
NASA Technical Reports Server (NTRS)
Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.
1978-01-01
Subjects can correct their own errors of movement more quickly than they can react to external stimuli by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic such as from muscle spindles and joint receptors, and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified, by changing the sign of the visual display to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. The central processing was interfered with when the subjects were given moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increase both the simple and choice reaction times but not the error correction time.
Flexible methods for segmentation evaluation: results from CT-based luggage screening.
Karimi, Seemeen; Jiang, Xiaoqian; Cosman, Pamela; Martz, Harry
2014-01-01
Imaging systems used in aviation security include segmentation algorithms in an automatic threat recognition pipeline. The segmentation algorithms evolve in response to emerging threats and changing performance requirements. Analysis of segmentation algorithms' behavior, including the nature of errors and feature recovery, facilitates their development. However, evaluation methods from the literature provide limited characterization of the segmentation algorithms. Our aim was to develop segmentation evaluation methods that measure systematic errors such as oversegmentation and undersegmentation, outliers, and overall errors; the methods must also measure feature recovery and allow us to prioritize segments. We developed two complementary evaluation methods using statistical techniques and information theory. We also created a semi-automatic method to define ground truth from 3D images. We applied our methods to evaluate five segmentation algorithms developed for CT luggage screening. We validated our methods with synthetic problems and an observer evaluation. Both methods selected the same best segmentation algorithm. Human evaluation confirmed the findings. The measurement of systematic errors and prioritization helped in understanding the behavior of each segmentation algorithm. Our evaluation methods allow us to measure and explain the accuracy of segmentation algorithms.
Frequency-specific hippocampal-prefrontal interactions during associative learning
Brincat, Scott L.; Miller, Earl K.
2015-01-01
Much of our knowledge of the world depends on learning associations (e.g., face-name), for which the hippocampus (HPC) and prefrontal cortex (PFC) are critical. HPC-PFC interactions have rarely been studied in monkeys, whose cognitive/mnemonic abilities are akin to humans. Here, we show functional differences and frequency-specific interactions between HPC and PFC of monkeys learning object-pair associations, an animal model of human explicit memory. PFC spiking activity reflected learning in parallel with behavioral performance, while HPC neurons reflected feedback about whether trial-and-error guesses were correct or incorrect. Theta-band HPC-PFC synchrony was stronger after errors, was driven primarily by PFC to HPC directional influences, and decreased with learning. In contrast, alpha/beta-band synchrony was stronger after correct trials, was driven more by HPC, and increased with learning. Rapid object associative learning may occur in PFC, while HPC may guide neocortical plasticity by signaling success or failure via oscillatory synchrony in different frequency bands. PMID:25706471
Simultaneous estimation of human and exoskeleton motion: A simplified protocol.
Alvarez, M T; Torricelli, D; Del-Ama, A J; Pinto, D; Gonzalez-Vargas, J; Moreno, J C; Gil-Agudo, A; Pons, J L
2017-07-01
Adequate benchmarking procedures in the area of wearable robots is gaining importance in order to compare different devices on a quantitative basis, improve them and support the standardization and regulation procedures. Performance assessment usually focuses on the execution of locomotion tasks, and is mostly based on kinematic-related measures. Typical drawbacks of marker-based motion capture systems, gold standard for measure of human limb motion, become challenging when measuring limb kinematics, due to the concomitant presence of the robot. This work answers the question of how to reliably assess the subject's body motion by placing markers over the exoskeleton. Focusing on the ankle joint, the proposed methodology showed that it is possible to reconstruct the trajectory of the subject's joint by placing markers on the exoskeleton, although foot flexibility during walking can impact the reconstruction accuracy. More experiments are needed to confirm this hypothesis, and more subjects and walking conditions are needed to better characterize the errors of the proposed methodology, although our results are promising, indicating small errors.
NASA Technical Reports Server (NTRS)
Barshi, Immanuel
2016-01-01
Multitasking is endemic in modern life and work: drivers talk on cell phones, office workers type while answering phone calls, students do homework while text messaging, nurses prepare injections while responding to doctors' calls, and air traffic controllers direct aircraft in one sector while handling additional traffic in another. Whether in daily life or at work, we are constantly bombarded with multiple, concurrent interruptions and demands, and we have all somehow come to believe in the myth that we can, and in fact are expected to, easily address them all without any repercussions. However, accumulating scientific evidence now suggests that multitasking increases the probability of human error. This talk presents a set of NASA studies that characterize concurrent demands in one work domain, routine airline cockpit operations, in order to illustrate the ways operational task demands, together with the proclivity to manage them all concurrently, make human performance in this and in any domain vulnerable to potentially serious errors and accidents.
2017-01-01
We selected iOS in this study as the App operating system, Objective-C as the programming language, and Oracle as the database to develop an App to inspect controlled substances in patient care units. Using a web-enabled smartphone, pharmacist inspection can be performed on site and the inspection results can be directly recorded into the HIS through the Internet, so human error in data transcription can be minimized and work efficiency and data processing can be improved. This system is not only faster and more convenient than the conventional paperwork, but also provides data security and accuracy. In addition, there are several features to increase inspection quality: (1) verification of drug appearance, (2) a foolproof mechanism to avoid input errors or omissions, (3) automatic data conversion without human judgment, (4) online expiry-date alarms, and (5) instant display of inspection results, flagging unmatched items. This study has successfully turned paper-based medication inspection into inspection using a web-based mobile device. PMID:28286761
NASA Astrophysics Data System (ADS)
Margarit, G.
2013-12-01
This paper presents the results obtained by GMV in the maritime surveillance operational activations conducted in a set of research projects. These activations have been actively supported by users, whose feedback has been essential for better understanding their needs and the most urgently requested improvements. Different domains have been evaluated, from purely theoretical and scientific background (in terms of processing algorithms) up to purely logistic issues (IT configuration issues, strategies for improving system performance and avoiding bottlenecks, parallelization and back-up procedures). In all cases, automation is the key word, because users need near-real-time operations in which the interaction of human operators is minimized. In addition, automation reduces human-derived errors and provides better error tracking procedures. In the paper, different examples are depicted and analysed; for reasons of space, only the most representative ones are selected. Feedback from users is included and analysed as well.
Lu, Ying-Hao; Lee, Li-Yao; Chen, Ying-Lan; Cheng, Hsing-I; Tsai, Wen-Tsung; Kuo, Chen-Chun; Chen, Chung-Yu; Huang, Yaw-Bin
2017-01-01
We selected iOS in this study as the App operating system, Objective-C as the programming language, and Oracle as the database to develop an App to inspect controlled substances in patient care units. Using a web-enabled smartphone, pharmacist inspection can be performed on site and the inspection results can be directly recorded into the HIS through the Internet, so human error in data transcription can be minimized and work efficiency and data processing can be improved. This system is not only faster and more convenient than the conventional paperwork, but also provides data security and accuracy. In addition, there are several features to increase inspection quality: (1) verification of drug appearance, (2) a foolproof mechanism to avoid input errors or omissions, (3) automatic data conversion without human judgment, (4) online expiry-date alarms, and (5) instant display of inspection results, flagging unmatched items. This study has successfully turned paper-based medication inspection into inspection using a web-based mobile device.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
Analysis of Factors Influencing Measurement Accuracy of Al Alloy Tensile Test Results
NASA Astrophysics Data System (ADS)
Podgornik, Bojan; Žužek, Borut; Sedlaček, Marko; Kevorkijan, Varužan; Hostej, Boris
2016-02-01
In order to properly use materials in design, complete understanding of and information on their mechanical properties, such as yield strength and ultimate tensile strength, must be obtained. Furthermore, as the design of automotive parts is constantly pushed toward higher limits, excessive measurement uncertainty can lead to unexpected premature failure of the component, thus requiring reliable determination of material properties with low uncertainty. The aim of the present work was to evaluate the effect of different metrology factors, including the number of tested samples, specimen machining and surface quality, specimen input diameter, type of testing and human error, on the tensile test results and measurement uncertainty when performed on a 2xxx series Al alloy. Results show that the most significant contribution to measurement uncertainty comes from the number of samples tested, which can even exceed 1%. Furthermore, moving from controlled laboratory conditions to a very intense industrial environment further amplifies measurement uncertainty, where human error cannot be neglected even when automated systems are used.
Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.
Maier, Martin E; Steinhauser, Marco
2013-10-02
Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.
Dosimetry audit simulation of treatment planning system in multicenters radiotherapy
NASA Astrophysics Data System (ADS)
Kasmuri, S.; Pawiro, S. A.
2017-07-01
The Treatment Planning System (TPS) is an important modality that determines radiotherapy outcome. A TPS requires input data obtained through commissioning, and errors can potentially occur at this stage; such errors may result in systematic errors. The aim of this study was to verify TPS dosimetry and determine the deviation range between calculated and measured doses. This study used the CIRS phantom 002LFC, representing the human thorax, and simulated all external beam radiotherapy stages. The phantom was scanned using a CT scanner, and 8 test cases similar to clinical practice situations were planned and tested in four radiotherapy centers. Doses were measured using a 0.6 cc ionization chamber. The results of this study showed that, generally, the deviations of all test cases in the four centers were within the agreement criteria, with average deviations of about -0.17±1.59 %, -1.64±1.92 %, 0.34±1.34 % and 0.13±1.81 %. The conclusion of this study was that all TPSs involved in this study showed good performance. The superposition algorithm showed somewhat poorer performance than either the analytic anisotropic algorithm (AAA) or the convolution algorithm, with average deviations of about -1.64±1.92 %, -0.17±1.59 % and -0.27±1.51 %, respectively.
A framework for analyzing the cognitive complexity of computer-assisted clinical ordering.
Horsky, Jan; Kaufman, David R; Oppenheim, Michael I; Patel, Vimla L
2003-01-01
Computer-assisted provider order entry is a technology that is designed to expedite medical ordering and to reduce the frequency of preventable errors. This paper presents a multifaceted cognitive methodology for the characterization of cognitive demands of a medical information system. Our investigation was informed by the distributed resources (DR) model, a novel approach designed to describe the dimensions of user interfaces that introduce unnecessary cognitive complexity. This method evaluates the relative distribution of external (system) and internal (user) representations embodied in system interaction. We conducted an expert walkthrough evaluation of a commercial order entry system, followed by a simulated clinical ordering task performed by seven clinicians. The DR model was employed to explain variation in user performance and to characterize the relationship of resource distribution and ordering errors. The analysis revealed that the configuration of resources in this ordering application placed unnecessarily heavy cognitive demands on the user, especially on those who lacked a robust conceptual model of the system. The resources model also provided some insight into clinicians' interactive strategies and patterns of associated errors. Implications for user training and interface design based on the principles of human-computer interaction in the medical domain are discussed.
Multi-Unit Considerations for Human Reliability Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
St. Germain, S.; Boring, R.; Banaseanu, G.
This paper uses the insights from the Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) methodology to help identify human actions currently modeled in the single unit PSA that may need to be modified to account for additional challenges imposed by a multi-unit accident, as well as identify possible new human actions that might be modeled to more accurately characterize multi-unit risk. In identifying these potential human action impacts, the use of the SPAR-H strategy to include both errors in diagnosis and errors in action is considered, as well as identifying characteristics of a multi-unit accident scenario that may impact the selection of the performance shaping factors (PSFs) used in SPAR-H. The lessons learned from the Fukushima Daiichi reactor accident will be addressed to further help identify areas where improved modeling may be required. While these multi-unit impacts may require modifications to a Level 1 PSA model, it is expected to have much more importance for Level 2 modeling. There is little currently written specifically about multi-unit HRA issues. A review of related published research will be presented. While this paper cannot answer all issues related to multi-unit HRA, it will hopefully serve as a starting point to generate discussion and spark additional ideas towards the proper treatment of HRA in a multi-unit PSA.
Helle, Samuli
2018-03-01
Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research, owing to the lack of experimental approaches. Problems with causal inference arising from measurement error in independent variables, whether related to inaccurate measurement technique or to the validity of the measurements, seem not to be well known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated to reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach, where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
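The core statistical point, attenuation of a regression slope when the predictor is measured with error, can be shown in a few lines; the simulation below is a sketch of the problem that multiple-indicator SEM is meant to fix (it does not fit an SEM), with invented effect sizes.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000
effort = rng.normal(0.0, 1.0, n)                     # true reproductive effort
survival = -0.5 * effort + rng.normal(0.0, 1.0, n)   # true effect size: -0.5

def slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

for noise_sd in (0.0, 0.5, 1.0):
    proxy = effort + rng.normal(0.0, noise_sd, n)    # error-laden single proxy
    # Classical attenuation: E[b] = b_true * var(x) / (var(x) + var(error)),
    # so the estimated effect shrinks toward zero as the proxy gets noisier.
    print(f"noise sd {noise_sd}: estimated slope {slope(proxy, survival):+.3f}")
```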
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, Y; National Cancer Center, Kashiwa, Chiba; Tachibana, H
Purpose: Total body irradiation (TBI) and total marrow irradiation (TMI) using Tomotherapy have been reported. A gantry-based linear accelerator uses one isocenter during one rotational irradiation; thus, 3-5 isocenter points are needed for a whole VMAT-TBI plan while smoothing out the junctional dose distribution. IGRT provides accurate and precise patient setup for the multiple junctions; however, some setup errors will inevitably occur and affect the accuracy of the dose distribution in that area. In this study, we evaluated the robustness of VMAT-TBI to patient setup errors. Methods: VMAT-TBI planning was performed on an adult whole-body human phantom using Eclipse. Eight full arcs with four isocenter points using 6MV-X were used to cover the entire body. Dose distribution was optimized using two structures, the patient's body as PTV and the lung. Each pair of arcs shared one isocenter, and adjacent pairs of arcs overlapped by 5 cm. Absolute point doses (ionization chamber) and planar relative dose distributions (film) in the junctional regions were measured using a water-equivalent slab phantom. In the measurements, setup errors of +5 to -5 mm were introduced. Results: The chamber measurements show that the deviations were within ±3% when the setup errors were within ±3 mm. In the planar evaluation, the gamma pass rate (3%/2mm) was more than 90% when the errors were within ±3 mm. However, there were hot/cold areas at the edge of the junction even with acceptable gamma pass rates. A 5 mm setup error caused larger hot and cold areas, and the dosimetrically acceptable areas in the overlapped regions decreased. Conclusion: VMAT-TBI is clinically acceptable when patient setup errors are within ±3 mm. Averaging effects from random patient setup errors would help blur the hot/cold areas in the junction.
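A simple 1D version of the gamma evaluation (3%/2mm) used above, applied to a junction-like profile with a simulated 3 mm setup shift; the profile and the global-dose normalization are illustrative, not the authors' software.

```python
import numpy as np

def gamma_1d(ref_dose, ref_pos, meas_dose, meas_pos, dd=0.03, dta_mm=2.0):
    """Simple global 1D gamma evaluation: for each measured point, gamma is
    the minimum combined dose/distance metric over the reference profile;
    gamma <= 1 counts as a pass."""
    dmax = ref_dose.max()
    gammas = []
    for d_m, x_m in zip(meas_dose, meas_pos):
        dose_term = (d_m - ref_dose) / (dd * dmax)
        dist_term = (x_m - ref_pos) / dta_mm
        gammas.append(np.sqrt(dose_term**2 + dist_term**2).min())
    gammas = np.array(gammas)
    return gammas, np.mean(gammas <= 1.0) * 100.0

x = np.linspace(-50.0, 50.0, 201)                 # mm across a junction
ref = np.exp(-(x / 30.0) ** 2)                    # reference profile
meas = np.exp(-((x - 3.0) / 30.0) ** 2)           # 3 mm setup shift
_, pass_rate = gamma_1d(ref, x, meas, x, dd=0.03, dta_mm=2.0)
print(f"gamma pass rate (3%/2mm): {pass_rate:.1f}%")
```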
Kaufhold, John P; Tsai, Philbert S; Blinder, Pablo; Kleinfeld, David
2012-08-01
A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by "learned threshold relaxation"; (2) removes spurious segments by "learning to eliminate deletion candidate strands"; and (3) enforces consistency in the joint space of learned vascular graph corrections through "consistency learning." Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with >800³ voxels, obtained with all optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision recall curves, we find that learning with bagged boosted decision trees reduces equal-error error rates for threshold relaxation by 5-21% and strand elimination performance by 18-57%. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. Copyright © 2012 Elsevier B.V. All rights reserved.
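A hedged stand-in for "learning with bagged boosted decision trees" on labeled candidate objects, using scikit-learn; the feature set for gap-connection candidates is hypothetical and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical features for gap-connection candidates: endpoint distance,
# angle between strand directions, radius mismatch, local intensity.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.5, 500) > 0).astype(int)

# Bagged ensemble of boosted decision trees, echoing the paper's description.
clf = BaggingClassifier(GradientBoostingClassifier(n_estimators=50, max_depth=2),
                        n_estimators=10, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```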
Kaufhold, John P.; Tsai, Philbert S.; Blinder, Pablo; Kleinfeld, David
2012-01-01
A graph of tissue vasculature is an essential requirement to model the exchange of gasses and nutriments between the blood and cells in the brain. Such a graph is derived from a vectorized representation of anatomical data, provides a map of all vessels as vertices and segments, and may include the location of nonvascular components, such as neuronal and glial somata. Yet vectorized data sets typically contain erroneous gaps, spurious endpoints, and spuriously merged strands. Current methods to correct such defects only address the issue of connecting gaps and further require manual tuning of parameters in a high dimensional algorithm. To address these shortcomings, we introduce a supervised machine learning method that (1) connects vessel gaps by “learned threshold relaxation”; (2) removes spurious segments by “learning to eliminate deletion candidate strands”; and (3) enforces consistency in the joint space of learned vascular graph corrections through “consistency learning.” Human operators are only required to label individual objects they recognize in a training set and are not burdened with tuning parameters. The supervised learning procedure examines the geometry and topology of features in the neighborhood of each vessel segment under consideration. We demonstrate the effectiveness of these methods on four sets of microvascular data, each with > 800³ voxels, obtained with all optical histology of mouse tissue and vectorization by state-of-the-art techniques in image segmentation. Through statistically validated sampling and analysis in terms of precision recall curves, we find that learning with bagged boosted decision trees reduces equal-error error rates for threshold relaxation by 5 to 21 % and strand elimination performance by 18 to 57 %. We benchmark generalization performance across datasets; while improvements vary between data sets, learning always leads to a useful reduction in error rates. Overall, learning is shown to more than halve the total error rate, and therefore, human time spent manually correcting such vectorizations. PMID:22854035
Reliability of drivers in urban intersections.
Gstalter, Herbert; Fastenmeier, Wolfgang
2010-01-01
The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE-task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers' or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary between both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly equals the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
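The HEP arithmetic described above is simple enough to show directly; the task categories echo the abstract, but the counts below are invented for illustration.

```python
# Minimal sketch of the HRA error-index arithmetic: human error probability
# per task type = errors committed / opportunities for that error.
observations = {
    "signalised intersection":     {"errors": 14, "opportunities": 620},
    "non-signalised intersection": {"errors": 57, "opportunities": 496},
    "roundabout":                  {"errors": 21, "opportunities": 124},
}

for task, c in observations.items():
    hep = c["errors"] / c["opportunities"]     # human error probability
    print(f"{task:30s} HEP = {hep:.3f}  reliability = {1.0 - hep:.3f}")
```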
IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion.
Dehzangi, Omid; Taherisadr, Mojtaba; ChangalVala, Raghvendar
2017-11-27
The widespread use of wearable sensors, such as those in smart watches, has provided continuous access to valuable user-generated data such as human motion, which can be used to identify an individual based on his/her motion patterns, such as gait. Several methods have been suggested to extract various heuristic and high-level features from gait motion data to identify discriminative gait signatures and distinguish the target individual from others. However, manual and hand-crafted feature extraction is error prone and subjective. Furthermore, the motion data collected from inertial sensors have complex structure, and the detachment between the manual feature extraction module and the predictive learning models might limit generalization capabilities. In this paper, we propose a novel approach for human gait identification using time-frequency (TF) expansion of human gait cycles in order to capture joint two-dimensional (2D) spectral and temporal patterns of gait cycles. Then, we design a deep convolutional neural network (DCNN) to extract discriminative features from the 2D expanded gait cycles and jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We collect raw motion data from five inertial sensors placed at the chest, lower back, right wrist, right knee, and right ankle of each human subject synchronously in order to investigate the impact of sensor location on gait identification performance. We then present two methods for early (input level) and late (decision score level) multi-sensor fusion to improve the gait identification generalization performance. We specifically propose the minimum error score fusion (MESF) method, which discriminatively learns the linear fusion weights of individual DCNN scores at the decision level by minimizing the error rate on the training data in an iterative manner. Ten subjects participated in this study; hence, the problem is a 10-class identification task. Based on our experimental results, 91% subject identification accuracy was achieved using the best individual IMU and 2DTF-DCNN. We then investigated our proposed early and late sensor fusion approaches, which improved the gait identification accuracy of the system to 93.36% and 97.06%, respectively.
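A sketch of decision-score fusion in the spirit of MESF: linear weights over per-sensor score matrices are learned by coordinate search to minimize training error rate. This is an assumed implementation of the iterative minimization the abstract describes, run on synthetic scores.

```python
import numpy as np

def mesf_weights(scores, labels, iters=20, steps=21):
    """Learn linear fusion weights over per-sensor class-score matrices by
    coordinate search, minimizing training error rate (a sketch of
    minimum-error score fusion)."""
    n_sensors = len(scores)
    w = np.ones(n_sensors) / n_sensors

    def error_rate(w):
        fused = sum(wi * s for wi, s in zip(w, scores))  # (n_samples, n_classes)
        return np.mean(fused.argmax(axis=1) != labels)

    for _ in range(iters):
        for i in range(n_sensors):
            cand = np.linspace(0.0, 1.0, steps)
            errs = []
            for c in cand:
                w_try = w.copy()
                w_try[i] = c
                errs.append(error_rate(w_try))
            w[i] = cand[int(np.argmin(errs))]
    return w / max(w.sum(), 1e-12)

rng = np.random.default_rng(3)
labels = rng.integers(0, 10, 200)                 # 10-class task, 200 samples
onehot = np.eye(10)[labels]
# Synthetic per-sensor score matrices; sensor 0 is the least noisy (best).
scores = [onehot + rng.normal(0.0, s, (200, 10)) for s in (0.4, 0.8, 1.2)]
print("learned fusion weights:", np.round(mesf_weights(scores, labels), 2))
```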
Performance of concatenated Reed-Solomon/Viterbi channel coding
NASA Technical Reports Server (NTRS)
Divsalar, D.; Yuen, J. H.
1982-01-01
The concatenated Reed-Solomon (RS)/Viterbi coding system is reviewed. The performance of the system is analyzed and results are derived with a new simple approach. A functional model for the input RS symbol error probability is presented. Based on this new functional model, we compute the performance of a concatenated system in terms of RS word error probability, output RS symbol error probability, bit error probability due to decoding failure, and bit error probability due to decoding error. Finally we analyze the effects of the noisy carrier reference and the slow fading on the system performance.
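Under the usual bounded-distance decoding assumption, the RS word error probability named above is a binomial tail; the sketch uses the (255,223) code with symbol-error-correction capability t = 16 that is standard in deep-space concatenated systems.

```python
from math import comb

def rs_word_error_prob(p_sym, n=255, t=16):
    """Probability a Reed-Solomon code word fails to decode, assuming
    independent input symbol errors with probability p_sym: the word is
    wrong when more than t of the n symbols are in error."""
    return sum(comb(n, i) * p_sym**i * (1.0 - p_sym)**(n - i)
               for i in range(t + 1, n + 1))

for p in (0.01, 0.02, 0.03):
    print(f"input symbol error {p:.2f} -> word error "
          f"{rs_word_error_prob(p):.2e}")
```

The input symbol error probability p_sym would itself come from a functional model of the Viterbi decoder output, as the abstract describes.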
Hamm, Jordan P.; Dyckman, Kara A.; McDowell, Jennifer E.; Clementz, Brett A.
2012-01-01
Cognitive control is required for correct performance on antisaccade tasks, including the ability to inhibit an externally driven ocular motor response (a saccade to a peripheral stimulus) in favor of an internally driven ocular motor goal (a saccade directed away from a peripheral stimulus). Healthy humans occasionally produce errors during antisaccade tasks, but the mechanisms associated with such failures of cognitive control are uncertain. Most research on cognitive control failures focuses on post-stimulus processing, although a growing body of literature highlights a role of intrinsic brain activity in perceptual and cognitive performance. The current investigation used dense array electroencephalography and distributed source analyses to examine brain oscillations across a wide frequency bandwidth in the period prior to antisaccade cue onset. Results highlight four important aspects of ongoing and preparatory brain activations that differentiate error from correct antisaccade trials: (i) ongoing oscillatory beta (20–30 Hz) power in anterior cingulate prior to trial initiation (lower for error trials), (ii) instantaneous phase of ongoing alpha-theta (7 Hz) in frontal and occipital cortices immediately before trial initiation (opposite between trial types), (iii) gamma power (35–60 Hz) in posterior parietal cortex 100 ms prior to cue onset (greater for error trials), and (iv) phase locking of alpha (5–12 Hz) in parietal and occipital cortices immediately prior to cue onset (lower for error trials). These findings extend recently reported effects of pre-trial alpha phase on perception to cognitive control processes, and help identify the cortical generators of such phase effects. PMID:22593071
Pratte, Michael S.; Park, Young Eun; Rademaker, Rosanne L.; Tong, Frank
2016-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced “oblique effect”, with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. PMID:28004957
Pratte, Michael S; Park, Young Eun; Rademaker, Rosanne L; Tong, Frank
2017-01-01
If we view a visual scene that contains many objects, then momentarily close our eyes, some details persist while others seem to fade. Discrete models of visual working memory (VWM) assume that only a few items can be actively maintained in memory, beyond which pure guessing will emerge. Alternatively, continuous resource models assume that all items in a visual scene can be stored with some precision. Distinguishing between these competing models is challenging, however, as resource models that allow for stochastically variable precision (across items and trials) can produce error distributions that resemble random guessing behavior. Here, we evaluated the hypothesis that a major source of variability in VWM performance arises from systematic variation in precision across the stimuli themselves; such stimulus-specific variability can be incorporated into both discrete-capacity and variable-precision resource models. Participants viewed multiple oriented gratings, and then reported the orientation of a cued grating from memory. When modeling the overall distribution of VWM errors, we found that the variable-precision resource model outperformed the discrete model. However, VWM errors revealed a pronounced "oblique effect," with larger errors for oblique than cardinal orientations. After this source of variability was incorporated into both models, we found that the discrete model provided a better account of VWM errors. Our results demonstrate that variable precision across the stimulus space can lead to an unwarranted advantage for resource models that assume stochastically variable precision. When these deterministic sources are adequately modeled, human working memory performance reveals evidence of a discrete capacity limit. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
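Both entries above analyze report-error distributions with mixture models; a common baseline is a uniform-guessing plus von Mises mixture fit by maximum likelihood, sketched below on synthetic data. (For orientation stimuli the ±90° space is usually doubled to ±180° first; that step is omitted here.)

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import vonmises

def fit_mixture(errors_rad):
    """Fit p(err) = (1-g)*VonMises(0, kappa) + g*Uniform(-pi, pi) by maximum
    likelihood; g is the guess rate, kappa the memory precision."""
    def nll(params):
        g, log_kappa = params
        if not 0.0 <= g <= 1.0:
            return np.inf
        pdf = ((1.0 - g) * vonmises.pdf(errors_rad, np.exp(log_kappa))
               + g / (2.0 * np.pi))
        return -np.sum(np.log(pdf + 1e-300))
    res = minimize(nll, x0=[0.2, np.log(5.0)], method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])

rng = np.random.default_rng(7)
true_g, true_kappa, n = 0.3, 8.0, 1000
guess = rng.random(n) < true_g
err = np.where(guess, rng.uniform(-np.pi, np.pi, n),
               vonmises.rvs(true_kappa, size=n, random_state=7))
g_hat, kappa_hat = fit_mixture(err)
print(f"guess rate ~ {g_hat:.2f}, precision kappa ~ {kappa_hat:.1f}")
```

Allowing kappa to vary across trials or stimuli, as in the variable-precision and stimulus-specific accounts above, is a direct extension of this fit.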
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voisin, Sophie; Pinto, Frank M; Morin-Ducote, Garnetta
2013-01-01
Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BI-RADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC=0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists' gaze behavior and image content.
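A hedged sketch of the merging step: concatenate gaze and image feature blocks, train a classifier on error/no-error labels, and compare AUCs. The feature names, classifier choice, and synthetic data are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 240                                    # e.g. 40 cases x 6 readers
gaze = rng.normal(size=(n, 5))             # dwell time, fixations, ... (hypothetical)
image = rng.normal(size=(n, 6))            # texture / BI-RADS features (hypothetical)
error = (0.8 * gaze[:, 0] + 0.6 * image[:, 0]
         + rng.normal(0, 1, n) > 1.0).astype(int)   # 1 = diagnostic error

for name, X in [("gaze only", gaze), ("image only", image),
                ("merged", np.hstack([gaze, image]))]:
    prob = cross_val_predict(RandomForestClassifier(200, random_state=0),
                             X, error, cv=5, method="predict_proba")[:, 1]
    print(f"{name:10s} AUC = {roc_auc_score(error, prob):.3f}")
```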
Quantitative susceptibility mapping of human brain at 3T: a multisite reproducibility study.
Lin, P-Y; Chao, T-C; Wu, M-L
2015-03-01
Quantitative susceptibility mapping of the human brain has demonstrated strong potential in examining iron deposition, which may help in investigating possible brain pathology. This study assesses the reproducibility of quantitative susceptibility mapping across different imaging sites. In this study, the susceptibility values of 5 regions of interest in the human brain were measured on 9 healthy subjects following calibration by using phantom experiments. Each of the subjects was imaged 5 times on 1 scanner with the same procedure repeated on 3 different 3T systems so that both within-site and cross-site quantitative susceptibility mapping precision levels could be assessed. Two quantitative susceptibility mapping algorithms, similar in principle, one by using iterative regularization (iterative quantitative susceptibility mapping) and the other with analytic optimal solutions (deterministic quantitative susceptibility mapping), were implemented, and their performances were compared. Results show that while deterministic quantitative susceptibility mapping had nearly 700 times faster computation speed, residual streaking artifacts seem to be more prominent compared with iterative quantitative susceptibility mapping. With quantitative susceptibility mapping, the putamen, globus pallidus, and caudate nucleus showed smaller imprecision on the order of 0.005 ppm, whereas the red nucleus and substantia nigra, closer to the skull base, had a somewhat larger imprecision of approximately 0.01 ppm. Cross-site errors were not significantly larger than within-site errors. Possible sources of estimation errors are discussed. The reproducibility of quantitative susceptibility mapping in the human brain in vivo is regionally dependent, and the precision levels achieved with quantitative susceptibility mapping should allow longitudinal and multisite studies such as aging-related changes in brain tissue magnetic susceptibility. © 2015 by American Journal of Neuroradiology.
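The within-site versus cross-site comparison can be illustrated as follows; the array layout matches only the study design (9 subjects × 3 sites × 5 repeat scans), and the susceptibility values are hypothetical.

```python
# Minimal sketch: within-site vs. cross-site precision of ROI
# susceptibility values for one region of interest.
import numpy as np

rng = np.random.default_rng(2)
chi = 0.10 + rng.normal(scale=0.005, size=(9, 3, 5))  # ppm: subject x site x repeat

within_site = chi.std(axis=2, ddof=1).mean()              # spread over repeats
cross_site = chi.mean(axis=2).std(axis=1, ddof=1).mean()  # spread over site means
print(f"within-site SD = {within_site:.4f} ppm, "
      f"cross-site SD = {cross_site:.4f} ppm")
```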
Safety coaches in radiology: decreasing human error and minimizing patient harm.
Dickerson, Julie M; Koch, Bernadette L; Adams, Janet M; Goodfriend, Martha A; Donnelly, Lane F
2010-09-01
Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program.
Error-associated behaviors and error rates for robotic geology
NASA Technical Reports Server (NTRS)
Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin
2004-01-01
This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.
Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne
2015-05-01
To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to the outer needle cap design to mitigate needle-stick potential. In the first validation study (49 users, 343 simulated injections), in the FN group, one observed critical use error resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.
Atomic force microscopy analysis of human cornea surface after UV (λ=266 nm) laser irradiation
NASA Astrophysics Data System (ADS)
Spyratou, E.; Makropoulou, M.; Moutsouris, K.; Bacharis, C.; Serafetinides, A. A.
2009-07-01
Efficient cornea reshaping by laser irradiation for correcting refractive errors is still a major issue of interest and study. Although the excimer laser wavelength of 193 nm is generally recognized as successful in ablating corneal tissue for myopia correction, complications in excimer refractive surgery have led to the exploration of alternative laser sources and methods for efficient cornea treatment. In this work, ablation experiments on human donor cornea flaps were conducted with the 4th harmonic of an Nd:YAG laser (λ = 266 nm), with different laser pulses. AFM analysis was performed to examine the morphology and surface roughness of the ablated cornea flaps.
The development of performance-monitoring function in the posterior medial frontal cortex
Fitzgerald, Kate Dimond; Perkins, Suzanne C.; Angstadt, Mike; Johnson, Timothy; Stern, Emily R.; Welsh, Robert C.; Taylor, Stephan F.
2009-01-01
Background: Despite its critical role in performance monitoring, the development of the posterior medial prefrontal cortex (pMFC) in goal-directed behaviors remains poorly understood. Performance monitoring depends on distinct but related functions that may differentially activate the pMFC, such as monitoring response conflict and detecting errors. Developmental differences in conflict- and error-related activations, coupled with age-related changes in behavioral performance, may confound attempts to map the maturation of pMFC functions. To characterize the development of pMFC-based performance-monitoring functions, we segregated interference- and error-processing while statistically controlling for performance. Methods: Twenty-one adults and 23 youth performed an event-related version of the Multi-Source Interference Task during functional magnetic resonance imaging (fMRI). Interference and error contrast estimates derived from the pMFC were regressed on age in linear models, covarying for performance. Results: Interference- and error-processing were associated with robust activation of the pMFC in both youth and adults. Among youth, interference- and error-related activation of the pMFC increased with age, independent of performance. Greater accuracy was associated with greater pMFC activity during error commission in both groups. Discussion: Increasing pMFC response to interference and errors occurs with age, likely contributing to the improvement of performance-monitoring capacity during development. PMID:19913101
A circadian rhythm in skill-based errors in aviation maintenance.
Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A
2010-07-01
In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work at each hour of the day, and the 24-h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent-minded" errors involving failures to execute action plans as intended.
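The exposure adjustment described above, dividing hourly error counts by the estimated proportion of staff on duty, can be sketched as follows. All counts and staffing proportions here are invented for illustration; they are not the survey data.

```python
# Minimal sketch: adjusting hourly error counts for the estimated
# proportion of maintenance staff present at each hour of the day.
import numpy as np

hours = np.arange(24)
# Hypothetical raw counts of skill-based errors per hour of day
raw = np.array([9, 11, 14, 12, 10, 7, 5, 4, 3, 3, 4, 4,
                5, 4, 4, 3, 3, 4, 5, 5, 6, 7, 8, 9])
# Assumed fraction of the workforce on duty each hour (night shift is
# smaller, so raw night-time counts understate the per-worker risk)
on_duty = np.array([.25, .25, .25, .25, .25, .30, .60, .80, .90, .90,
                    .90, .90, .90, .90, .85, .80, .70, .55, .45, .35,
                    .30, .28, .26, .25])
adjusted = raw / on_duty
peak = hours[np.argmax(adjusted)]
print(f"adjusted skill-based error rate peaks at {peak:02d}:00")
```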
2012-01-01
Background Humans are capable of fast adaptation to new unknown dynamics that affect their movements. Such motor learning is also believed to be an important part of motor rehabilitation. Bimanual training can improve post-stroke rehabilitation outcome and is associated with interlimb coordination between both limbs. Some studies indicate partial transfer of skills among limbs of healthy individuals. Another aspect of bimanual training is the (a)symmetry of bimanual movements and how these affect motor learning and possibly post-stroke rehabilitation. Methods A novel bimanual 2-DOF robotic system was used for both bimanual and unimanual reaching movements. 35 young healthy adults participated in the study. They were divided into 5 test groups that performed movements under different conditions (bimanual or unimanual movements and symmetric or asymmetric bimanual arm loads). The subjects performed a simple tracking exercise with the bimanual system. The exercise was developed to stimulate motor learning by applying a velocity-dependent disturbance torque to the handlebar. Each subject performed 255 trials divided into three phases: baseline without disturbance torque, training phase with disturbance torque and evaluation phase with disturbance torque. Results Performance was assessed with the maximal values of rotation errors of the handlebar. After exposure to disturbance torque, the errors decreased for both unimanual and bimanual training. Errors in unimanual evaluation following the bimanual training phase were not significantly different from errors in unimanual evaluation following unimanual training. There was no difference in performance following symmetric or asymmetric training. Changing the arm force symmetry during bimanual movements from asymmetric to symmetric had little influence on performance. Conclusions Subjects could adapt to an unknown disturbance torque that was changing the dynamics of the movements. The learning effect was present during both unimanual and bimanual training. Transfer of learned skills from bimanual training to unimanual movements was also observed, as bimanual training also improved single limb performance with the dominant arm. Changes of force symmetry did not have an effect on motor learning. As motor learning is believed to be an important mechanism of rehabilitation, our findings could be tested for future post-stroke rehabilitation systems. PMID:22805223
Double-pass measurement of human eye aberrations: limitations and practical realization
NASA Astrophysics Data System (ADS)
Letfullin, Renat R.; Belyakov, Alexey I.; Cherezova, Tatyana Y.; Kudryashov, Alexis V.
2004-11-01
The problem of correctly measuring eye aberrations has become very important with the increasingly widespread use of LASIK (laser-assisted in situ keratomileusis), a surgical procedure for reducing refractive error in the eye. The double-pass technique commonly used for measuring the aberrations of a human eye involves some uncertainties. One of them is the loss of information about odd human eye aberrations. We report investigations of the applicability limit of double-pass measurements depending upon the aberration content of the human eye and the actual size of the entrance pupil. We evaluate the double-pass effects for various aberrations and different pupil diameters. It is shown that for small pupils the double-pass effects are negligible. The testing and alignment of the aberrometer were performed using the schematic eye developed in our lab. We also introduce a model of the human eye based on a bimorph flexible mirror. We perform calculations to demonstrate that our schematic eye is capable of reproducing the spatial-temporal statistics of aberrations of a living eye with normal vision, as well as of myopic, hypermetropic, or highly aberrated eyes.
Delong, Caroline M; Au, Whitlow W L; Harley, Heidi E; Roitblat, Herbert L; Pytka, Lisa
2007-08-01
Echolocating bottlenose dolphins (Tursiops truncatus) discriminate between objects on the basis of the echoes reflected by the objects. However, it is not clear which echo features are important for object discrimination. To gain insight into the salient features, the authors had a dolphin perform a match-to-sample task and then presented human listeners with echoes from the same objects used in the dolphin's task. In 2 experiments, human listeners performed as well or better than the dolphin at discriminating objects, and they reported the salient acoustic cues. The error patterns of the humans and the dolphin were compared to determine which acoustic features were likely to have been used by the dolphin. The results indicate that the dolphin did not appear to use overall echo amplitude, but that it attended to the pattern of changes in the echoes across different object orientations. Human listeners can quickly identify salient combinations of echo features that permit object discrimination, which can be used to generate hypotheses that can be tested using dolphins as subjects.
Error quantification of abnormal extreme high waves in Operational Oceanographic System in Korea
NASA Astrophysics Data System (ADS)
Jeong, Sang-Hun; Kim, Jinah; Heo, Ki-Young; Park, Kwang-Soon
2017-04-01
In the winter season, large swell-like waves have occurred on the east coast of Korea, causing property damage and loss of human life. These waves are known to be generated by strong local winds produced by extratropical cyclones moving eastward over the East Sea of the Korean peninsula. Because the waves often occur in clear weather, the damage they cause can be particularly severe. It is therefore necessary to predict and forecast large swell-like waves in order to prevent and respond to coastal damage. In Korea, an operational oceanographic system (KOOS) has been developed by the Korea Institute of Ocean Science and Technology (KIOST). KOOS provides daily 72-hour ocean forecasts such as wind, water elevation, sea currents, water temperature, salinity, and waves, computed not only from meteorological and hydrodynamic models (WRF, ROMS, MOM, and MOHID) but also from wave models (WW-III and SWAN). In order to evaluate model performance and guarantee a certain level of accuracy of the ocean forecasts, a Skill Assessment (SA) system was established as a module of KOOS. It is performed through comparison of model results with in-situ observation data, and model errors are quantified with skill scores. The statistics used in skill assessment measure both errors and correlations, including root-mean-square error (RMSE), root-mean-square-error percentage (RMSE%), mean bias (MB), correlation coefficient (R), scatter index (SI), circular correlation (CC), and central frequency (CF), the frequency with which errors lie within acceptable error criteria. These statistics should be used not only to quantify errors but also to improve forecast accuracy by providing interactive feedback. However, abnormal phenomena such as the large swell-like waves on the east coast of Korea require a more advanced and optimized error quantification method that makes it possible to predict the abnormal waves well and to improve forecast accuracy by supporting the modification of model physics and numerics through sensitivity tests. In this study, we propose an appropriate method of error quantification for abnormal high waves caused by local weather conditions. Furthermore, we describe how the quantified errors contribute to improving wind-wave modeling by applying data assimilation and utilizing reanalysis data.
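For reference, the named skill scores can be computed as in the sketch below. The definitions follow common conventions and may differ in detail from the KOOS implementation; the circular correlation (CC) used for directional variables is omitted, and the wave heights are invented.

```python
# Minimal sketch: forecast skill scores named in the abstract (RMSE,
# RMSE%, mean bias, correlation, scatter index, central frequency).
import numpy as np

def skill_scores(model, obs, criterion):
    err = model - obs
    rmse = np.sqrt(np.mean(err**2))
    return {
        "RMSE": rmse,
        "RMSE%": 100 * rmse / np.mean(obs),
        "MB": np.mean(err),                       # mean bias
        "R": np.corrcoef(model, obs)[0, 1],       # correlation coefficient
        "SI": rmse / np.mean(obs),                # scatter index
        "CF": np.mean(np.abs(err) <= criterion),  # central frequency
    }

obs = np.array([1.8, 2.4, 3.1, 4.6, 5.2, 3.9])    # observed wave height (m)
model = np.array([1.6, 2.7, 3.0, 4.0, 4.5, 3.7])  # 72-h forecast (m)
print(skill_scores(model, obs, criterion=0.5))
```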
Spatial durbin error model for human development index in Province of Central Java.
NASA Astrophysics Data System (ADS)
Septiawan, A. R.; Handajani, S. S.; Martini, T. S.
2018-05-01
The Human Development Index (HDI) is an indicator used to measure success in building the quality of human life, describing how people access development outcomes in terms of income, health, and education. Every year the HDI in Central Java has improved. In 2016, the HDI in Central Java was 69.98%, an increase of 0.49% over the previous year. The objective of this study was to apply the spatial Durbin error model, using queen-contiguity spatial weights, to model HDI in Central Java Province. The spatial Durbin error model is used because it accounts for both spatially dependent errors and spatial dependence in the independent variables. The factors used are life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity. Based on the results, we obtain a spatial Durbin error model for HDI in Central Java in which the influencing factors are life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity.
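For reference, a standard formulation of the spatial Durbin error model (with W the queen-contiguity weight matrix) is:

```latex
% Spatial Durbin error model (SDEM), standard form
\begin{aligned}
  y &= X\beta + W X\theta + u, \\
  u &= \lambda W u + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n),
\end{aligned}
```

where y is the vector of district HDI values, X contains the four predictors, theta captures the spatially lagged effects of the predictors, and lambda is the spatial error parameter.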
Human-Robot Interaction Directed Research Project
NASA Technical Reports Server (NTRS)
Sandor, Aniko; Cross, Ernest V., II; Chang, M. L.
2014-01-01
Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces and command modalities affects the human's ability to perform tasks accurately, efficiently, and effectively when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. This DRP concentrates on three areas associated with interfaces and command modalities in HRI which are applicable to NASA robot systems: 1) Video Overlays, 2) Camera Views, and 3) Command Modalities. The first study focused on video overlays and investigated how Augmented Reality (AR) symbology can be added to the human-robot interface to improve teleoperation performance. Three types of AR symbology were explored in this study: command guidance (CG), situation guidance (SG), and both (SCG). CG symbology gives operators explicit instructions on what commands to input, whereas SG symbology gives operators implicit cues so that operators can infer the input commands. The combination of CG and SG provided operators with explicit and implicit cues, allowing the operator to choose which symbology to utilize. The objective of the study was to understand how AR symbology affects the human operator's ability to align a robot arm to a target using a flight stick and the ability to allocate attention between the symbology and external views of the world. The study evaluated the effects that symbology type (CG and SG) has on operator task performance and attention allocation during teleoperation of a robot arm. The second study expanded on the first by evaluating the effects of the type of navigational guidance (CG and SG) on operator task performance and attention allocation during teleoperation of a robot arm through uplinked commands. Although this study complements the first study on navigational guidance with hand controllers, it is a separate investigation due to the distinction in intended operators (i.e., crewmembers versus ground operators). A third study looked at superimposed and integrated overlays for teleoperation of a mobile robot using a hand controller. When AR is superimposed on the external world, it appears to be fixed onto the display and internal to the operators' workstation. Unlike superimposed overlays, integrated overlays often appear as three-dimensional objects and move as if part of the external world. Studies conducted in the aviation domain show that integrated overlays can improve situation awareness and reduce the amount of deviation from the optimal path. The purpose of the study was to investigate whether these results apply to HRI tasks, such as navigation with a mobile robot. HRP gaps: This HRI research contributes to the closure of HRP gaps by providing information on how display and control characteristics (those related to guidance, feedback, and command modalities) affect operator performance. The overarching goals are to improve interface usability, reduce operator error, and develop candidate guidelines for designing effective human-robot interfaces.
Sollmann, Nico; Tanigawa, Noriko; Tussis, Lorena; Hauck, Theresa; Ille, Sebastian; Maurer, Stefanie; Negwer, Chiara; Zimmer, Claus; Ringel, Florian; Meyer, Bernhard; Krieg, Sandro M
2015-04-01
Knowledge about the cortical representation of semantic processing is mainly derived from functional magnetic resonance imaging (fMRI) or direct cortical stimulation (DCS) studies. Because DCS is regarded as the gold standard for language mapping but can only be used during awake surgery due to its invasive character, repetitive navigated transcranial magnetic stimulation (rTMS), a non-invasive modality that uses a similar technique to DCS, seems highly feasible for investigating semantic processing in the healthy human brain. A total of 100 (50 left-hemispheric and 50 right-hemispheric) rTMS-based language mappings were performed in 50 purely right-handed, healthy volunteers during an object-naming task. All rTMS-induced semantic naming errors were then counted and evaluated systematically. Furthermore, since the distribution of stimulations within both hemispheres varied between individuals and cortical regions stimulated, all elicited errors were standardized and subsequently related to their cortical sites by projecting the mapping results into the cortical parcellation system (CPS). Overall, the highest left-hemispheric semantic error rates were observed after targeting rTMS to the posterior middle frontal gyrus (pMFG; standardized error rate: 7.3‰), anterior supramarginal gyrus (aSMG; 5.6‰), and ventral postcentral gyrus (vPoG; 5.0‰). In contrast, the highest right-hemispheric error rates occurred after stimulation of the posterior superior temporal gyrus (pSTG; 12.4‰), middle superior temporal gyrus (mSTG; 6.2‰), and anterior supramarginal gyrus (aSMG; 6.2‰). Although error rates were low, the rTMS-based approach to investigating semantic processing during object naming yields convincing results when compared with the current literature. Therefore, rTMS seems to be a valuable, safe, and reliable tool for the investigation of semantic processing within the healthy human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.
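The standardization described above amounts to dividing each region's error count by the number of stimulations delivered there, expressed per mille. A minimal sketch with invented counts (the region labels match the abstract, but the numbers do not):

```python
# Minimal sketch: per-mille standardized semantic error rates,
# region -> (error count, stimulations delivered); values are hypothetical.
counts = {"pMFG": (11, 1500), "aSMG": (8, 1430), "vPoG": (6, 1200)}
for region, (errors, stimulations) in counts.items():
    rate = 1000 * errors / stimulations
    print(f"{region}: {rate:.1f} per mille")
```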
Observing human movements helps decoding environmental forces.
Zago, Myrka; La Scaleia, Barbara; Miller, William L; Lacquaniti, Francesco
2011-11-01
Vision of human actions can affect several features of visual motion processing, as well as the motor responses of the observer. Here, we tested the hypothesis that action observation helps in decoding environmental forces during the interception of a decelerating target within a brief time window, an intrinsically very difficult task. We employed a factorial design to evaluate the effects of scene orientation (normal or inverted) and target gravity (normal or inverted). A button-press triggered the motion of a bullet, a piston, or a human arm. We found that the timing errors were smaller for upright scenes irrespective of gravity direction in the Bullet group, while the errors were smaller for the standard condition of normal scene and gravity in the Piston group. In the Arm group, instead, performance was better when the directions of scene and target gravity were concordant, irrespective of whether both were upright or inverted. These results suggest that the default viewer-centered reference frame is used with inanimate scenes, such as those of the Bullet and Piston protocols. Instead, the presence of biological movements in animate scenes (as in the Arm protocol) may help in processing target kinematics under the ecological conditions of coherence between scene and target gravity directions.
Residents' numeric inputting error in computerized physician order entry prescription.
Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong
2016-04-01
Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human-computer interaction (HCI), produce different error rates and types, but these have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting methods (numeric row in the main keyboard vs. numeric keypad) and urgency levels (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were also measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. After controlling for performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. For the digits 3, 8, and 9, which occur less commonly in prescriptions, the error rate was higher, posing a greater risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad is recommended, as it produced lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the less frequently used keys and the decimal key. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from the spatial incidence of errors found in this study. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
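A simplified version of the error taxonomy used in the analysis can be expressed as rules over the intended and typed digit strings. The sketch below uses assumed classification rules for illustration, not the study's actual coding scheme.

```python
# Minimal sketch: classifying a typed numeric string against the intended
# one into the four error types analyzed in the study.
def classify(intended: str, typed: str) -> str:
    if typed == intended:
        return "correct"
    if len(typed) < len(intended) and all(c in intended for c in typed):
        return "omission"        # a digit was dropped
    if len(typed) > len(intended):
        return "intrusion"       # an extra digit appeared
    if sorted(typed) == sorted(intended):
        return "transposition"   # same digits, wrong order
    return "substitution"        # a digit replaced by another

for intended, typed in [("250", "25"), ("250", "205"),
                        ("250", "280"), ("250", "2500")]:
    print(f"{intended} -> {typed}: {classify(intended, typed)}")
```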
Mental representation of symbols as revealed by vocabulary errors in two bonobos (Pan paniscus).
Lyn, Heidi
2007-10-01
Error analysis has been used in humans to detect implicit representations and categories in language use. The present study utilizes the same technique to report on mental representations and categories in symbol use by two bonobos (Pan paniscus). These bonobos have been shown in published reports to comprehend English at the level of a two-and-a-half-year-old child and to use a keyboard with over 200 visuographic symbols (lexigrams). In this study, vocabulary test errors from over 10 years of data revealed auditory, visual, and spatio-temporal generalizations (errors were more likely to be items that looked like, sounded like, or were frequently associated with the sample item in space or in time), as well as hierarchical and conceptual categorizations. These error data, like those of humans, are a result of spontaneous responding rather than specific training and do not solely depend upon the sample mode (e.g., auditory similarity errors are not universally more frequent with an English sample, nor were visual similarity errors universally more frequent with a photograph sample). However, unlike humans, these bonobos do not make errors based on syntactical confusions (e.g., confusing semantically unrelated nouns), suggesting that they may not separate syntactical and semantic information. These data suggest that apes spontaneously create a complex, hierarchical web of representations when exposed to a symbol system.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-15
... management of human error in its operations and system safety programs, and the status of PTC implementation... UP's safety management policies and programs associated with human error, operational accident and... Chairman of the Board of Inquiry 2. Introduction of the Board of Inquiry and Technical Panel 3...
SU-E-J-29: Automatic Image Registration Performance of Three IGRT Systems for Prostate Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barber, J; University of Sydney, Sydney, NSW; Sykes, J
Purpose: To compare the performance of an automatic image registration algorithm on image sets collected on three commercial image guidance systems, and to explore its relationship with imaging parameters such as dose and sharpness. Methods: Images of a CIRS Virtually Human Male Pelvis phantom (VHMP) were collected on the CBCT systems of Varian TrueBeam/OBI and Elekta Synergy/XVI linear accelerators across a range of mAs settings, and MVCT images were collected on a Tomotherapy Hi-ART accelerator with a range of pitch settings. Using the 6D correlation ratio algorithm of XVI, each image was registered to a mask of the prostate volume with a 5 mm expansion. Registrations were repeated 100 times, with random initial offsets introduced to simulate daily matching. Residual registration errors were calculated by correcting for the initial phantom set-up error. Automatic registration was also repeated after reconstructing images with different sharpness filters. Results: All three systems showed good registration performance, with residual translations <0.5 mm (1σ) for typical clinical dose and reconstruction settings. Residual rotational errors had a larger range, with 1σ values of 0.8°, 1.2°, and 1.9° for XVI, OBI, and Tomotherapy, respectively. The registration accuracy of XVI images showed a strong dependence on imaging dose, particularly below 4 mGy. No evidence of reduced performance was observed at the lowest dose settings for OBI and Tomotherapy, but these were above 4 mGy. Registration failures (maximum target registration error > 3.6 mm on the surface of a 30 mm sphere) occurred in 5% to 10% of registrations. Changing the sharpness of image reconstruction had no significant effect on registration performance. Conclusions: Using the present automatic image registration algorithm, all IGRT systems tested provided satisfactory registrations for clinical use within a normal range of acquisition settings.
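The residual-error calculation described in the Methods, subtracting the known initial offset from each recovered registration, can be sketched as follows. Translations only, with invented numbers; the study also evaluated rotations.

```python
# Minimal sketch: residual registration error = recovered correction
# minus the known random offset introduced for each simulated match.
import numpy as np

rng = np.random.default_rng(3)
true_offsets = rng.uniform(-5, 5, size=(100, 3))               # mm, per trial
recovered = true_offsets + rng.normal(0, 0.4, size=(100, 3))   # algorithm output
residual = recovered - true_offsets
print("1-sigma residual translation (mm):", residual.std(axis=0).round(2))
```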
Maikusa, Norihide; Yamashita, Fumio; Tanaka, Kenichiro; Abe, Osamu; Kawaguchi, Atsushi; Kabasawa, Hiroyuki; Chiba, Shoma; Kasahara, Akihiro; Kobayashi, Nobuhisa; Yuasa, Tetsuya; Sato, Noriko; Matsuda, Hiroshi; Iwatsubo, Takeshi
2013-06-01
Serial magnetic resonance imaging (MRI) images acquired from multisite and multivendor MRI scanners are widely used in measuring longitudinal structural changes in the brain. Precise and accurate measurements are important in understanding the natural progression of neurodegenerative disorders such as Alzheimer's disease. However, geometric distortions in MRI images decrease the accuracy and precision of volumetric or morphometric measurements. To solve this problem, the authors propose a distortion correction method, based on a commercially available phantom, that accommodates the variation in geometric distortion within MRI images obtained with multivendor MRI scanners. The authors' method is based on image warping using a polynomial function. The method detects fiducial points within a phantom image using phantom analysis software developed by the Mayo Clinic and calculates warping functions for distortion correction. To quantify the effectiveness of the method, the authors corrected phantom images obtained from multivendor MRI scanners and calculated the root-mean-square (RMS) of the fiducial errors and the circularity ratio as evaluation values. The authors also compared the performance of their method with that of a distortion correction method based on a spherical harmonics description of the generic gradient design parameters. Moreover, the authors evaluated whether this correction improves the test-retest reproducibility of voxel-based morphometry in human studies. A Wilcoxon signed-rank test with uncorrected and corrected images was performed. The root-mean-square errors and circularity ratios for all slices improved significantly (p < 0.0001) after the authors' distortion correction. Additionally, the authors' method was significantly better than the spherical-harmonics-based distortion correction method at reducing the root-mean-square errors (p < 0.001 and p = 0.0337, respectively). Moreover, the authors' method reduced the RMS error arising from gradient nonlinearity more than gradwarp methods. In human studies, the coefficient of variation of voxel-based morphometry analysis of the whole brain improved significantly from 3.46% to 2.70% after distortion correction of the whole gray matter using the authors' method (Wilcoxon signed-rank test, p < 0.05). The authors proposed a phantom-based distortion correction method to improve reproducibility in longitudinal structural brain analysis using multivendor MRI. The method was evaluated on phantom images in terms of two geometrical values and on human images in terms of test-retest reproducibility. The results showed that distortion was corrected significantly using the authors' method. In human studies, the reproducibility of voxel-based morphometry analysis for the whole gray matter significantly improved after distortion correction using the authors' method.
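The core of the correction, a least-squares polynomial warp fitted to detected fiducial points, can be sketched in two dimensions as follows. A quadratic basis and synthetic fiducials are assumed; the authors' implementation details (basis order, 3-D handling) may differ.

```python
# Minimal sketch: fit a polynomial warp mapping distorted fiducial
# positions to known phantom positions, then report RMS fiducial error.
import numpy as np

def design(p):
    """Quadratic polynomial basis for 2-D points (N x 2)."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_warp(distorted, true_pos):
    A = design(distorted)
    coeffs, *_ = np.linalg.lstsq(A, true_pos, rcond=None)
    return coeffs  # one column of coefficients per output coordinate

rng = np.random.default_rng(4)
true_pos = rng.uniform(-100, 100, size=(60, 2))  # known fiducial grid (mm)
distorted = true_pos + 0.001 * true_pos**2 + rng.normal(0, 0.1, (60, 2))

coeffs = fit_warp(distorted, true_pos)
corrected = design(distorted) @ coeffs
rms = np.sqrt(np.mean(np.sum((corrected - true_pos) ** 2, axis=1)))
print(f"RMS fiducial error after correction: {rms:.2f} mm")
```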
How smart is your BEOL? productivity improvement through intelligent automation
NASA Astrophysics Data System (ADS)
Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Garetto, Anthony
2017-07-01
The back end of line (BEOL) workflow in the mask shop still has crucial issues throughout all standard steps: inspection, disposition, photomask repair, and verification of repair success. All involved tools are typically run by highly trained operators or engineers who set up jobs and recipes, execute tasks, analyze data, and make decisions based on the results. No matter how experienced operators are and how well the systems perform, there is one aspect that always limits the productivity and effectiveness of the operation: the human aspect. Human errors can range from seemingly harmless slip-ups to mistakes with serious and direct economic impact, including mask rejects, customer returns, and line stops in the wafer fab. Even with the introduction of quality control mechanisms that help to reduce these critical but unavoidable faults, they can never be completely eliminated. Therefore the mask shop BEOL cannot run in the most efficient manner, as unnecessary time and money are spent on processes that remain labor intensive. The best way to address this issue is to automate critical segments of the workflow that are prone to human errors. In fact, manufacturing errors can occur at each BEOL step where operators intervene. These processes comprise image evaluation, setting up tool recipes, data handling, and all other tedious but required steps. With the help of smart solutions, operators can work more efficiently and dedicate their time to less mundane tasks. Smart solutions connect tools, taking over the data handling and analysis typically performed by operators and engineers. These solutions not only eliminate the human error factor in the manufacturing process but can provide benefits in terms of shorter cycle times, reduced bottlenecks, and prediction of an optimized workflow. In addition, such software solutions consist of building blocks that seamlessly integrate applications and allow customers to use tailored solutions. To accommodate the variability and complexity in mask shops today, individual workflows can be supported according to the needs of any particular manufacturing line with respect to necessary measurement and production steps. At the same time, the efficiency of assets is increased by avoiding unneeded cycle time and wasted resources, retaining only the process steps that are crucial for a given technology. In this paper we present details of which areas of the BEOL can benefit most from intelligent automation and which solutions exist, and we quantify the benefits of full automation to a mask shop by means of a back-end-of-line model.
[Scientific ethics of human cloning].
Valenzuela, Carlos Y
2005-01-01
True cloning is fission, budding or other types of asexual reproduction. In humans it occurs in monozygote twinning. This type of cloning is ethically and religiously good. Human cloning can be performed by twinning (TWClo) or nuclear transfer (NTClo). Both methods need a zygote or a nuclear transferred cell, obtained in vitro (IVTec). They are under the IVTec ethics. IVTecs use humans (zygotes, embryos) as drugs or things; increase the risk of malformations; increase development and size of abnormalities and may cause long-term changes. Cloning for preserving extinct (or almost extinct) animals or humans when sexual reproduction is not possible is ethically valid. The previous selection of a phenotype in human cloning violates some ethical principles. NTClo for reproductive or therapeutic purposes is dangerous since it increases the risk for nucleotide or chromosome mutations, de-programming or re-programming errors, aging or malignancy of the embryo cells thus obtained.
Reliability Analysis and Standardization of Spacecraft Command Generation Processes
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Grenander, Sven; Evensen, Ken
2011-01-01
• In order to reduce commanding errors that are caused by humans, we create an approach and corresponding artifacts for standardizing the command generation process and conducting risk management during the design and assurance of such processes.
• The literature review conducted during the standardization process revealed that very few atomic-level human activities are associated with even a broad set of missions.
• Applicable human reliability metrics for performing these atomic-level tasks are available.
• The process for building a "Periodic Table" of Command and Control Functions as well as Probabilistic Risk Assessment (PRA) models is demonstrated.
• The PRA models are executed using data from human reliability data banks.
• The Periodic Table is related to the PRA models via Fault Links.
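A minimal sketch of how such PRA models combine human error probabilities (HEPs) through fault-tree gates follows. The gate structure and HEP values below are assumed for illustration, not taken from the actual models or any specific data bank.

```python
# Minimal sketch: propagating human error probabilities through simple
# AND/OR fault-tree gates, assuming independent basic events.
def p_or(*probs):
    """Probability that at least one error occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs):
    """Probability that all errors occur together."""
    out = 1.0
    for p in probs:
        out *= p
    return out

hep_build = 3e-3   # error while building a command (assumed value)
hep_review = 1e-1  # independent reviewer misses the error (assumed value)
hep_uplink = 1e-4  # transmission/verification slip (assumed value)

p_bad_command = p_or(p_and(hep_build, hep_review), hep_uplink)
print(f"P(erroneous command reaches spacecraft) ~ {p_bad_command:.2e}")
```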
Error framing effects on performance: cognitive, motivational, and affective pathways.
Steele-Johnson, Debra; Kalinoski, Zachary T
2014-01-01
Our purpose was to examine whether positive error framing, that is, making errors salient and cuing individuals to see errors as useful, can benefit learning when task exploration is constrained. Recent research has demonstrated the benefits of a newer approach to training, that is, error management training, that includes the opportunity to actively explore the task and framing errors as beneficial to learning complex tasks (Keith & Frese, 2008). Other research has highlighted the important role of errors in on-the-job learning in complex domains (Hutchins, 1995). Participants (N = 168) from a large undergraduate university performed a class scheduling task. Results provided support for a hypothesized path model in which error framing influenced cognitive, motivational, and affective factors which in turn differentially affected performance quantity and quality. Within this model, error framing had significant direct effects on metacognition and self-efficacy. Our results suggest that positive error framing can have beneficial effects even when tasks cannot be structured to support extensive exploration. Whereas future research can expand our understanding of error framing effects on outcomes, results from the current study suggest that positive error framing can facilitate learning from errors in real-time performance of tasks.