Sample records for human errors led

  1. Latent error detection: A golden two hours for detection.

    PubMed

    Saward, Justin R E; Stanton, Neville A

    2017-03-01

    Undetected error in safety critical contexts generates a latent condition that can contribute to a future safety failure. The detection of latent errors post-task completion is observed in naval air engineers using a diary to record work-related latent error detection (LED) events. A systems view is combined with multi-process theories to explore sociotechnical factors associated with LED. Perception of cues in different environments facilitates successful LED, for which the deliberate review of past tasks within two hours of the error occurring, whilst remaining in the same or a similar sociotechnical environment to that in which the error occurred, appears most effective. Identified ergonomic interventions offer potential mitigation for latent errors, particularly in simple everyday habitual tasks. It is thought safety critical organisations should look to engineer further resilience through the application of LED techniques that engage with system cues across the entire sociotechnical environment, rather than relying on consistent human performance. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  2. Airline Crew Training

    NASA Technical Reports Server (NTRS)

    1989-01-01

    The discovery that human error has caused many more airline crashes than mechanical malfunctions led to an increased emphasis on teamwork and coordination in airline flight training programs. Human factors research at Ames Research Center has produced two crew training programs directed toward more effective operations. Cockpit Resource Management (CRM) defines areas such as decision making, workload distribution, and communication skills as essential in addressing human error problems. In 1979, a workshop led to the implementation of the CRM program by United Airlines and, later, by other airlines. In Line Oriented Flight Training (LOFT), crews fly missions in realistic simulators while instructors induce emergency situations requiring crew coordination. This is followed by a self-critique. Ames Research Center continues its involvement with these programs.

  3. Human error and human factors engineering in health care.

    PubMed

    Welch, D L

    1997-01-01

    Human error is inevitable. It happens in health care systems as it does in all other complex systems, and no measure of attention, training, dedication, or punishment is going to stop it. The discipline of human factors engineering (HFE) has been dealing with the causes and effects of human error since the 1940's. Originally applied to the design of increasingly complex military aircraft cockpits, HFE has since been effectively applied to the problem of human error in such diverse systems as nuclear power plants, NASA spacecraft, the process control industry, and computer software. Today the health care industry is becoming aware of the costs of human error and is turning to HFE for answers. Just as early experimental psychologists went beyond the label of "pilot error" to explain how the design of cockpits led to air crashes, today's HFE specialists are assisting the health care industry in identifying the causes of significant human errors in medicine and developing ways to eliminate or ameliorate them. This series of articles will explore the nature of human error and how HFE can be applied to reduce the likelihood of errors and mitigate their effects.

  4. Common medial frontal mechanisms of adaptive control in humans and rodents

    PubMed Central

    Frank, Michael J.; Laubach, Mark

    2013-01-01

    In this report, we describe how common brain networks within the medial frontal cortex facilitate adaptive behavioral control in rodents and humans. We demonstrate that low frequency oscillations below 12 Hz are dramatically modulated after errors in humans over mid-frontal cortex and in rats within prelimbic and anterior cingulate regions of medial frontal cortex. These oscillations were phase-locked between medial frontal cortex and motor areas in both rats and humans. In rats, single neurons that encoded prior behavioral outcomes were phase-coherent with low-frequency field oscillations particularly after errors. Inactivating medial frontal regions in rats led to impaired behavioral adjustments after errors, eliminated the differential expression of low frequency oscillations after errors, and increased low-frequency spike-field coupling within motor cortex. Our results describe a novel mechanism for behavioral adaptation via low-frequency oscillations and elucidate how medial frontal networks synchronize brain activity to guide performance. PMID:24141310
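
    A minimal sketch of the kind of spike-field phase-locking measure described above, offered only as an illustration: the 4-12 Hz band, the sampling rate, and the synthetic LFP and spike data are assumptions, not the recordings or analysis code of this study.

      # Hedged sketch: phase locking between spikes and a low-frequency field oscillation (Python).
      import numpy as np
      from scipy.signal import butter, filtfilt, hilbert

      fs = 1000.0                                                        # sampling rate in Hz (assumed)
      t = np.arange(0, 10, 1 / fs)
      lfp = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)    # toy LFP with a 6 Hz rhythm
      spike_times = np.sort(np.random.uniform(0, 10, 200))               # toy spike times in seconds

      b, a = butter(3, [4 / (fs / 2), 12 / (fs / 2)], btype="band")      # 4-12 Hz band-pass (assumed band)
      phase = np.angle(hilbert(filtfilt(b, a, lfp)))                     # instantaneous phase of the filtered LFP

      spike_idx = np.clip((spike_times * fs).astype(int), 0, t.size - 1)
      spike_phases = phase[spike_idx]                                    # field phase at each spike

      plv = np.abs(np.mean(np.exp(1j * spike_phases)))                   # phase-locking value in [0, 1]
      print(f"spike-field phase-locking value: {plv:.3f}")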

  5. Critical incidents during prehospital cardiopulmonary resuscitation: what are the problems nobody wants to talk about?

    PubMed

    Hohenstein, Christian; Rupp, Peter; Fleischmann, Thomas

    2011-02-01

    We wanted to identify incidents that led or could have led to patient harm during prehospital cardiopulmonary resuscitation. The data were obtained from a nationwide, anonymous, Internet-based critical incident reporting system. During a 4-year period we received 548 reports, of which 74 occurred during cardiopulmonary resuscitation. Human error was responsible for 85% of the incidents, whereas equipment failure contributed to 15% of the reports. Equipment failure was considered to be preventable in 61% of all the cases, whereas incidents caused by human error could have been prevented in almost all the cases. In most cases, prevention can be accomplished by simple strategies using the Poka-Yoke technique. Insufficient training of emergency medical service physicians in Germany requires special attention. The critical incident reports raise concerns regarding the level of expertise provided by emergency medical service doctors.

  6. Understanding reliance on automation: effects of error type, error distribution, age and experience

    PubMed Central

    Sanchez, Julian; Rogers, Wendy A.; Fisk, Arthur D.; Rovira, Ericka

    2015-01-01

    An obstacle detection task supported by “imperfect” automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest that the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation. PMID:25642142

  7. Understanding reliance on automation: effects of error type, error distribution, age and experience.

    PubMed

    Sanchez, Julian; Rogers, Wendy A; Fisk, Arthur D; Rovira, Ericka

    2014-03-01

    An obstacle detection task supported by "imperfect" automation was used with the goal of understanding the effects of automation error types and age on automation reliance. Sixty younger and sixty older adults interacted with a multi-task simulation of an agricultural vehicle (i.e. a virtual harvesting combine). The simulator included an obstacle detection task and a fully manual tracking task. A micro-level analysis provided insight into the way reliance patterns change over time. The results indicated that there are distinct patterns of reliance that develop as a function of error type. A prevalence of automation false alarms led participants to under-rely on the automation during alarm states while over-relying on it during non-alarm states. Conversely, a prevalence of automation misses led participants to over-rely on automated alarms and under-rely on the automation during non-alarm states. Older adults adjusted their behavior according to the characteristics of the automation similarly to younger adults, although it took them longer to do so. The results of this study suggest that the relationship between automation reliability and reliance depends on the prevalence of specific errors and on the state of the system. Understanding the effects of automation detection criterion settings on human-automation interaction can help designers of automated systems make predictions about human behavior and system performance as a function of the characteristics of the automation.

  8. Designing to Control Flight Crew Errors

    NASA Technical Reports Server (NTRS)

    Schutte, Paul C.; Willshire, Kelli F.

    1997-01-01

    It is widely accepted that human error is a major contributing factor in aircraft accidents. There has been a significant amount of research into why these errors occur, and many reports state that the design of the flight deck can actually predispose humans to err. This research has led to calls for changes in design according to human factors and human-centered principles. The National Aeronautics and Space Administration's (NASA) Langley Research Center has initiated an effort to design a human-centered flight deck from a clean slate (i.e., without the constraints of existing designs). The effort will be based on recent research in human-centered design philosophy and mission management categories. This design will match the human's model of the mission and function of the aircraft to reduce unnatural or non-intuitive interfaces. The product of this effort will be a flight deck design description, including training and procedures, a cross-reference or paper trail back to design hypotheses, and an evaluation of the design. The present paper will discuss the philosophy, process, and status of this design effort.

  9. Observing Reasonable Consumers.

    ERIC Educational Resources Information Center

    Silber, Norman I.

    1991-01-01

    Although courts and legislators usually set legal standards that correspond to empirical knowledge of human behavior, recent developments in behavioral psychology have led courts to appreciate the limits and errors in consumer decision making. "Reasonable consumer" standards that are congruent with cognitive reality should be developed.…

  10. Understanding the nature of errors in nursing: using a model to analyse critical incident reports of errors which had resulted in an adverse or potentially adverse event.

    PubMed

    Meurier, C E

    2000-07-01

    Human errors are common in clinical practice, but they are under-reported. As a result, very little is known of the types, antecedents and consequences of errors in nursing practice. This limits the potential to learn from errors and to make improvement in the quality and safety of nursing care. The aim of this study was to use an Organizational Accident Model to analyse critical incidents of errors in nursing. Twenty registered nurses were invited to produce a critical incident report of an error (which had led to an adverse event or potentially could have led to an adverse event) they had made in their professional practice and to write down their responses to the error using a structured format. Using Reason's Organizational Accident Model, supplemental information was then collected from five of the participants by means of an individual in-depth interview to explore further issues relating to the incidents they had reported. The detailed analysis of one of the incidents is discussed in this paper, demonstrating the effectiveness of this approach in providing insight into the chain of events which may lead to an adverse event. The case study approach using critical incidents of clinical errors was shown to provide relevant information regarding the interaction of organizational factors, local circumstances and active failures (errors) in producing an adverse or potentially adverse event. It is suggested that more use should be made of this approach to understand how errors are made in practice and to take appropriate preventative measures.

  11. Review of Significant Incidents and Close Calls in Human Spaceflight from a Human Factors Perspective

    NASA Technical Reports Server (NTRS)

    Silva-Martinez, Jackelynne; Ellenberger, Richard; Dory, Jonathan

    2017-01-01

    This project aims to identify poor human factors design decisions that led to error-prone systems or did not facilitate the flight crew making the right choices, and to verify that NASA is effectively preventing similar incidents from occurring again. This analysis was performed by reviewing significant incidents and close calls in human spaceflight identified by the NASA Johnson Space Center Safety and Mission Assurance Flight Safety Office. The review of incidents shows whether the identified human errors arose during the operational phase (flight crew and ground control) or originated in the design phase (which includes manufacturing and test). This classification was performed with the aid of the NASA Human Systems Integration domains. This in-depth analysis resulted in a tool that helps with the human factors classification of significant incidents and close calls in human spaceflight, which can be used to identify human errors at the operational level and how they were or should be minimized. Current governing documents on human systems integration for both government and commercial crew were reviewed to see if current requirements, processes, training, and standard operating procedures protect the crew and ground control against these issues occurring in the future. Based on the findings, recommendations to target those areas are provided.

  12. Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation

    PubMed Central

    Russ, Alissa L; Zillich, Alan J; Melton, Brittany L; Russell, Scott A; Chen, Siying; Spina, Jeffrey R; Weiner, Michael; Johnson, Elizabette G; Daggy, Joanne K; McManus, M Sue; Hawsey, Jason M; Puleo, Anthony G; Doebbeling, Bradley N; Saleem, Jason J

    2014-01-01

    Objective To apply human factors engineering principles to improve alert interface design. We hypothesized that incorporating human factors principles into alerts would improve usability, reduce workload for prescribers, and reduce prescribing errors. Materials and methods We performed a scenario-based simulation study using a counterbalanced, crossover design with 20 Veterans Affairs prescribers to compare original versus redesigned alerts. We redesigned drug–allergy, drug–drug interaction, and drug–disease alerts based upon human factors principles. We assessed usability (learnability of redesign, efficiency, satisfaction, and usability errors), perceived workload, and prescribing errors. Results Although prescribers received no training on the design changes, prescribers were able to resolve redesigned alerts more efficiently (median (IQR): 56 (47) s) compared to the original alerts (85 (71) s; p=0.015). In addition, prescribers rated redesigned alerts significantly higher than original alerts across several dimensions of satisfaction. Redesigned alerts led to a modest but significant reduction in workload (p=0.042) and significantly reduced the number of prescribing errors per prescriber (median (range): 2 (1–5) compared to original alerts: 4 (1–7); p=0.024). Discussion Aspects of the redesigned alerts that likely contributed to better prescribing include design modifications that reduced usability-related errors, providing clinical data closer to the point of decision, and displaying alert text in a tabular format. Displaying alert text in a tabular format may help prescribers extract information quickly and thereby increase responsiveness to alerts. Conclusions This simulation study provides evidence that applying human factors design principles to medication alerts can improve usability and prescribing outcomes. PMID:24668841

  13. Applying human factors principles to alert design increases efficiency and reduces prescribing errors in a scenario-based simulation.

    PubMed

    Russ, Alissa L; Zillich, Alan J; Melton, Brittany L; Russell, Scott A; Chen, Siying; Spina, Jeffrey R; Weiner, Michael; Johnson, Elizabette G; Daggy, Joanne K; McManus, M Sue; Hawsey, Jason M; Puleo, Anthony G; Doebbeling, Bradley N; Saleem, Jason J

    2014-10-01

    To apply human factors engineering principles to improve alert interface design. We hypothesized that incorporating human factors principles into alerts would improve usability, reduce workload for prescribers, and reduce prescribing errors. We performed a scenario-based simulation study using a counterbalanced, crossover design with 20 Veterans Affairs prescribers to compare original versus redesigned alerts. We redesigned drug-allergy, drug-drug interaction, and drug-disease alerts based upon human factors principles. We assessed usability (learnability of redesign, efficiency, satisfaction, and usability errors), perceived workload, and prescribing errors. Although prescribers received no training on the design changes, prescribers were able to resolve redesigned alerts more efficiently (median (IQR): 56 (47) s) compared to the original alerts (85 (71) s; p=0.015). In addition, prescribers rated redesigned alerts significantly higher than original alerts across several dimensions of satisfaction. Redesigned alerts led to a modest but significant reduction in workload (p=0.042) and significantly reduced the number of prescribing errors per prescriber (median (range): 2 (1-5) compared to original alerts: 4 (1-7); p=0.024). Aspects of the redesigned alerts that likely contributed to better prescribing include design modifications that reduced usability-related errors, providing clinical data closer to the point of decision, and displaying alert text in a tabular format. Displaying alert text in a tabular format may help prescribers extract information quickly and thereby increase responsiveness to alerts. This simulation study provides evidence that applying human factors design principles to medication alerts can improve usability and prescribing outcomes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  14. A psychologist's view of validating aviation systems

    NASA Technical Reports Server (NTRS)

    Stein, Earl S.; Wagner, Dan

    1994-01-01

    All systems, no matter what they are designed to do, have shortcomings that may make them less productive than was hoped during the initial development. Such shortcomings can arise at any stage of development: from conception to the end of the implementation life cycle. While system failures and errors of a lesser magnitude can occur as a function of mechanical or software breakdown, the majority of such problems in aviation are usually laid on the shoulders of the human operator and, to a lesser extent, on human factors. The operator bears the responsibility and blame even though, from a human factors perspective, error may have been designed into the system. Human factors is not a new concept in aviation. The name may be new, but the issues related to operators in the loop date back to the industrial revolution of the nineteenth century and certainly to the aviation build-up for World War I. During this first global confrontation, military services from all sides discovered rather quickly that poor selection and training led to drastically increased personnel losses. While hardware design became an issue later, the early efforts were primarily focused on increased care in pilot selection and on their training. This actually involved early labor-intensive simulation, using such devices as sticks and chairs mounted on rope networks which could be manually moved in response to control input. The use of selection criteria and improved training led to more viable person-machine systems. More pilots survived training and their first ten missions in the air, a rule-of-thumb milestone arrived at by experience which predicted ultimate survival better than any other. This rule was to hold through World War II. At that time, personnel selection and training became very sophisticated by comparison with previous standards. Also, many psychologists were drafted into Army Air Corps programs which were geared towards refining the human factor. However, despite the talent involved in these programs and the tremendous build-up of aviation during the war, there were still aircraft designs that were man killers (no sexism implied since all combat pilots were men). One classic design error that was identified fifty years ago was the multipointer altimeter, which could easily be misread, especially by a pilot under considerable task load. It led to fully operational aircraft being flown into the terrain. The authors of the research which formally identified this problem put 'Human Errors' in quotes to express their dissatisfaction with the traditional approach to accident investigation, which places the burden of guilt on the operator. Some of these altimeters still exist in older aircraft to this day.

  15. Comparison of software and human observers in reading images of the CDMAM test object to assess digital mammography systems

    NASA Astrophysics Data System (ADS)

    Young, Kenneth C.; Cook, James J. H.; Oduko, Jennifer M.; Bosmans, Hilde

    2006-03-01

    European Guidelines for quality control in digital mammography specify minimum and achievable standards of image quality in terms of threshold contrast, based on readings of images of the CDMAM test object by human observers. However, this is time-consuming and has large inter-observer error. To overcome these problems a software program (CDCOM) is available to automatically read CDMAM images, but the optimal method of interpreting the output is not defined. This study evaluates methods of determining threshold contrast from the program, and compares these to human readings for a variety of mammography systems. The methods considered are (A) simple thresholding, (B) psychometric curve fitting, (C) smoothing and interpolation, and (D) smoothing and psychometric curve fitting. Each method leads to similar threshold contrasts but with different reproducibility. Method (A) had relatively poor reproducibility, with a standard error in threshold contrast of 18.1 ± 0.7%. This was reduced to 8.4% by using a contrast-detail curve fitting procedure. Method (D) had the best reproducibility, with an error of 6.7%, reducing to 5.1% with curve fitting. A panel of 3 human observers had an error of 4.4%, reduced to 2.9% by curve fitting. All automatic methods led to threshold contrasts that were lower than for humans. The ratio of human to program threshold contrasts varied with detail diameter and was 1.50 ± 0.04 (SEM) at 0.1 mm and 1.82 ± 0.06 at 0.25 mm for method (D). There were good correlations between the threshold contrast determined by humans and the automated methods.
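
    The psychometric curve fitting named in methods (B) and (D) can be sketched as follows. The logistic form, the 0.25 guess rate, the 62.5% threshold criterion, and the data points are illustrative assumptions, not the exact CDCOM output format or the European Guidelines procedure.

      # Hedged sketch: fit a psychometric curve to detection fraction vs. contrast and read off a threshold.
      import numpy as np
      from scipy.optimize import curve_fit

      contrast = np.array([0.05, 0.08, 0.13, 0.20, 0.32, 0.50])   # hypothetical nominal contrasts
      p_detect = np.array([0.28, 0.35, 0.55, 0.78, 0.92, 0.98])   # hypothetical detection fractions

      def psychometric(c, c_t, slope, guess=0.25):
          # Logistic curve rising from the guess rate to 1 as log-contrast increases.
          return guess + (1 - guess) / (1 + np.exp(-slope * (np.log(c) - np.log(c_t))))

      (c_t, slope), _ = curve_fit(psychometric, contrast, p_detect, p0=[0.15, 3.0])

      # With a 0.25 guess rate, the fitted midpoint c_t is the contrast giving 62.5% correct,
      # a common threshold criterion for 4-alternative forced-choice readings.
      print(f"fitted threshold contrast (62.5% correct): {c_t:.3f}")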

  16. Human performance in the modern cockpit

    NASA Technical Reports Server (NTRS)

    Dismukes, R. K.; Cohen, M. M.

    1992-01-01

    This panel was organized by the Aerospace Human Factors Committee to illustrate behavioral research on the perceptual, cognitive, and group processes that determine crew effectiveness in modern cockpits. Crew reactions to the introduction of highly automated systems in the cockpit will be reported on. Automation can improve operational capabilities and efficiency and can reduce some types of human error, but may also introduce entirely new opportunities for error. The problem solving and decision making strategies used by crews led by captains with various personality profiles will be discussed. Also presented will be computational approaches to modeling the cognitive demands of cockpit operations and the cognitive capabilities and limitations of crew members. Factors contributing to aircrew deviations from standard operating procedures and misuse of checklists, often leading to violations, incidents, or accidents, will be examined. The mechanisms of visual perception pilots use in aircraft control and the implications of these mechanisms for effective design of visual displays will be discussed.

  17. Safe prescribing: a titanic challenge

    PubMed Central

    Routledge, Philip A

    2012-01-01

    The challenge to achieve safe prescribing merits the adjective ‘titanic’. The organisational and human errors leading to poor prescribing (e.g. underprescribing, overprescribing, misprescribing or medication errors) have parallels in the organisational and human errors that led to the loss of the Titanic 100 years ago this year. Prescribing can be adversely affected by communication failures, critical conditions, complacency, corner cutting, callowness and a lack of courage of conviction, all of which were also factors leading to the Titanic tragedy. These issues need to be addressed by a commitment to excellence, the final component of the ‘Seven C's’. Optimal prescribing is dependent upon close communication and collaborative working between highly trained health professionals, whose role is to ensure maximum clinical effectiveness, whilst also protecting their patients from avoidable harm. Since humans are prone to error, and the environments in which they work are imperfect, it is not surprising that medication errors are common, occurring more often during the prescribing stage than during dispensing or administration. A commitment to excellence in prescribing includes a continued focus on lifelong learning (including interprofessional learning) in pharmacology and therapeutics. This should be accompanied by improvements in the clinical working environment of prescribers, and the encouragement of a strong safety culture (including reporting of adverse incidents as well as suspected adverse drug reactions whenever appropriate). Finally, members of the clinical team must be prepared to challenge each other, when necessary, to ensure that prescribing combines the highest likelihood of benefit with the lowest potential for harm. PMID:22738396

  18. ‘Why should I care?’ Challenging free will attenuates neural reaction to errors

    PubMed Central

    Pourtois, Gilles; Brass, Marcel

    2015-01-01

    Whether human beings have free will has been a philosophical question for centuries. The debate about free will has recently entered the public arena through mass media and newspaper articles commenting on scientific findings that leave little to no room for free will. Previous research has shown that encouraging such a deterministic perspective influences behavior, namely by promoting cursory and antisocial behavior. Here we propose that such behavioral changes may, at least partly, stem from a more basic neurocognitive process related to response monitoring, namely a reduced error detection mechanism. Our results show that the error-related negativity, a neural marker of error detection, was reduced in individuals led to disbelieve in free will. This finding shows that reducing the belief in free will has a specific impact on error detection mechanisms. More generally, it suggests that abstract beliefs about intentional control can influence basic and automatic processes related to action control. PMID:24795441

  19. Safe prescribing: a titanic challenge.

    PubMed

    Routledge, Philip A

    2012-10-01

    The challenge to achieve safe prescribing merits the adjective 'titanic'. The organisational and human errors leading to poor prescribing (e.g. underprescribing, overprescribing, misprescribing or medication errors) have parallels in the organisational and human errors that led to the loss of the Titanic 100 years ago this year. Prescribing can be adversely affected by communication failures, critical conditions, complacency, corner cutting, callowness and a lack of courage of conviction, all of which were also factors leading to the Titanic tragedy. These issues need to be addressed by a commitment to excellence, the final component of the 'Seven C's'. Optimal prescribing is dependent upon close communication and collaborative working between highly trained health professionals, whose role is to ensure maximum clinical effectiveness, whilst also protecting their patients from avoidable harm. Since humans are prone to error, and the environments in which they work are imperfect, it is not surprising that medication errors are common, occurring more often during the prescribing stage than during dispensing or administration. A commitment to excellence in prescribing includes a continued focus on lifelong learning (including interprofessional learning) in pharmacology and therapeutics. This should be accompanied by improvements in the clinical working environment of prescribers, and the encouragement of a strong safety culture (including reporting of adverse incidents as well as suspected adverse drug reactions whenever appropriate). Finally, members of the clinical team must be prepared to challenge each other, when necessary, to ensure that prescribing combines the highest likelihood of benefit with the lowest potential for harm. © 2012 The Author. British Journal of Clinical Pharmacology © 2012 The British Pharmacological Society.

  20. "First, know thyself": cognition and error in medicine.

    PubMed

    Elia, Fabrizio; Aprà, Franco; Verhovez, Andrea; Crupi, Vincenzo

    2016-04-01

    Although error is an integral part of the world of medicine, physicians have always been little inclined to take into account their own mistakes, and the extraordinary technological progress observed in the last decades does not seem to have resulted in a significant reduction in the percentage of diagnostic errors. The failure to reduce diagnostic errors, notwithstanding the considerable investment in human and economic resources, has paved the way to new strategies made available by the development of cognitive psychology, the branch of psychology that aims at understanding the mechanisms of human reasoning. This new approach led us to realize that we are not fully rational agents able to take decisions on the basis of logical and probabilistically appropriate evaluations. In us, two different and mostly independent modes of reasoning coexist: a fast or non-analytical mode, which tends to be largely automatic and fast-reactive, and a slow or analytical mode, which permits rationally founded answers to be given. One of the features of the fast mode of reasoning is the employment of standardized rules, termed "heuristics." Heuristics lead physicians to correct choices in a large percentage of cases. Unfortunately, cases exist wherein the triggered heuristic fails to fit the target problem, so that the fast mode of reasoning can lead us to unreflectively perform actions exposing us and others to variable degrees of risk. Cognitive errors arise as a result of these cases. Our review illustrates how cognitive errors can cause diagnostic problems in clinical practice.

  1. Non-Traditional Displays for Mission Monitoring

    NASA Technical Reports Server (NTRS)

    Trujillo, Anna C.; Schutte, Paul C.

    1999-01-01

    Advances in automation capability and reliability have changed the role of humans from operating and controlling processes to simply monitoring them for anomalies. However, humans are traditionally bad monitors of highly reliable systems over time. Thus, the human is assigned a task for which he is ill equipped. We believe that this has led to the dominance of human error in process control activities such as operating transportation systems (aircraft and trains), monitoring patient health in the medical industry, and controlling plant operations. Research has shown, though, that an automated monitor can assist humans in recognizing and dealing with failures. One possible solution to this predicament is to use a polar-star display that will show deviations from normal states based on parameters that are most indicative of mission health.

  2. Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources

    NASA Astrophysics Data System (ADS)

    Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.

    2011-05-01

    The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.
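
    The correction itself is described above only as "simple matrix algebra", and its exact form is not reproduced here. Purely as a hypothetical illustration of that style of correction, the sketch below assumes a linear model measured = (I + A) · true, with an invented coupling matrix A standing in for extra phosphor excitation inside the sphere, and solves for the true spectrum.

      # Hedged sketch: invert an assumed linear fluorescence-coupling model to correct a measured spectrum.
      import numpy as np

      wavelengths = np.arange(380, 781, 5)               # nm, assumed sampling grid
      measured = np.random.rand(wavelengths.size)        # placeholder measured spectral flux

      # Invented coupling: a small fraction of short-wavelength (blue) flux re-excites the phosphor
      # inside the sphere and appears as extra long-wavelength flux. Values are illustrative only.
      A = np.zeros((wavelengths.size, wavelengths.size))
      blue = wavelengths < 500
      yellow = wavelengths >= 500
      A[np.ix_(yellow, blue)] = 0.002

      corrected = np.linalg.solve(np.eye(wavelengths.size) + A, measured)   # recover the "true" spectrum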

  3. Synergistic Effects of Phase Folding and Wavelet Denoising with Applications in Light Curve Analysis

    DTIC Science & Technology

    2016-09-15

    ... future research. Historically, astronomy has been a data-driven science. Larger and more precise data sets have led to the ... forthcoming Large Synoptic Survey Telescope (LSST), the human-centric approach to astronomy is becoming strained [13, 24, 25, 63]. More than ever ... One use of the filtering process is to remove artifacts from the data set. In the context of time domain astronomy, an artifact is an error in ...
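
    For the phase-folding step named in the title, a minimal generic sketch is given below (synthetic light curve, assumed trial period); it is not the report's own wavelet-denoising pipeline.

      # Hedged sketch: fold a light curve at a trial period and bin the folded points.
      import numpy as np

      period = 1.7                                    # trial period in days (assumed)
      t = np.sort(np.random.uniform(0, 30, 500))      # observation times in days (synthetic)
      flux = 1.0 - 0.01 * (np.abs((t / period) % 1.0 - 0.5) < 0.05) + 0.002 * np.random.randn(t.size)

      phase = (t % period) / period                   # fold: map each observation time to a phase in [0, 1)
      edges = np.linspace(0, 1, 51)

      # Mean flux per phase bin: folding stacks the repeating signal and averages down the noise.
      counts, _ = np.histogram(phase, bins=edges)
      sums, _ = np.histogram(phase, bins=edges, weights=flux)
      binned = sums / np.maximum(counts, 1)           # empty bins (unlikely here) are left at zero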

  4. The influence of LED lighting on task accuracy: time of day, gender and myopia effects

    NASA Astrophysics Data System (ADS)

    Rao, Feng; Chan, A. H. S.; Zhu, Xi-Fang

    2017-07-01

    In this research, task errors were obtained during performance of a marker location task in which the markers were shown on a computer screen under nine LED lighting conditions: three illuminances (100, 300 and 500 lx) and three color temperatures (3000, 4500 and 6500 K). A total of 47 students participated voluntarily in these tasks. The results showed that task errors in the morning were small and nearly constant across the nine lighting conditions. However, in the afternoon, the task errors were significantly larger and varied across lighting conditions. The largest errors for the afternoon session occurred when the color temperature was 4500 K and the illuminance 500 lx. There were significant differences between task errors in the morning and afternoon sessions. No significant difference between females and males was found. Task errors for high myopia students were significantly larger than for low myopia students under the same lighting conditions. In summary, the influence of LED lighting on task accuracy during office hours was not gender dependent, but was time of day and myopia dependent.

  5. Medical error and systems of signaling: conceptual and linguistic definition.

    PubMed

    Smorti, Andrea; Cappelli, Francesco; Zarantonello, Roberta; Tani, Franca; Gensini, Gian Franco

    2014-09-01

    In recent years the issue of patient safety has been the subject of detailed investigation, particularly as a result of increasing attention from patients and the public to the problem of medical error. The purpose of this work is, first, to define a classification of medical errors, which can be viewed from two perspectives: those that are personal and those that are caused by the system. We then briefly review some of the main methods used by healthcare organizations to identify and analyze errors. This discussion shows that, in order to mount a practical, coordinated and shared response to error, it is necessary to promote an analysis that considers all of the elements (human, technological and organizational) that contribute to the occurrence of a critical event. It is therefore essential to create a culture of constructive confrontation that encourages an open and non-punitive debate about the causes that led to an error. In conclusion, we underline that in healthcare it is essential to adopt a systems perspective that treats error as a source of learning and as a result of the interaction between the individual and the organization. In this way, a blame-free discussion of evident errors, and of those that are not immediately identifiable, can create the conditions in which an error is recognized and corrected even before it produces negative consequences.

  6. The Passive Series Stiffness That Optimizes Torque Tracking for a Lower-Limb Exoskeleton in Human Walking

    PubMed Central

    Zhang, Juanjuan; Collins, Steven H.

    2017-01-01

    This study uses theory and experiments to investigate the relationship between the passive stiffness of series elastic actuators and torque tracking performance in lower-limb exoskeletons during human walking. Through theoretical analysis with our simplified system model, we found that the optimal passive stiffness matches the slope of the desired torque-angle relationship. We also conjectured that a bandwidth limit resulted in a maximum rate of change in torque error that can be commanded through control input, which is fixed across desired and passive stiffness conditions. This led to hypotheses about the interactions among optimal control gains, passive stiffness and desired quasi-stiffness. Walking experiments were conducted with multiple angle-based desired torque curves. The lowest torque tracking errors identified for each combination of desired and passive stiffnesses were shown to be linearly proportional to the magnitude of the difference between the two stiffnesses. The proportional gains corresponding to the lowest observed errors were seen to be inversely proportional to passive stiffness values and to desired stiffness. These findings supported our hypotheses, and provide guidance for application-specific hardware customization as well as controller design for torque-controlled robotic legged locomotion. PMID:29326580
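
    The stiffness-matching rule stated above (optimal passive stiffness equals the slope of the desired torque-angle relationship) can be written as a short sketch. The gait data below are placeholders, not the study's exoskeleton measurements.

      # Hedged sketch: pick the passive series stiffness as the slope of the desired torque-angle curve.
      import numpy as np

      ankle_angle = np.linspace(-0.1, 0.25, 100)                   # rad, assumed stance-phase sweep
      desired_torque = 80.0 * ankle_angle + 5.0 * ankle_angle**2   # N*m, illustrative desired curve

      # Least-squares slope of torque vs. angle = desired quasi-stiffness (N*m/rad).
      quasi_stiffness = np.polyfit(ankle_angle, desired_torque, 1)[0]
      passive_series_stiffness = quasi_stiffness                   # matching rule from the abstract
      print(f"passive series stiffness ≈ {passive_series_stiffness:.1f} N*m/rad")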

  7. Ketamine Effects on Memory Reconsolidation Favor a Learning Model of Delusions

    PubMed Central

    Gardner, Jennifer M.; Piggot, Jennifer S.; Turner, Danielle C.; Everitt, Jessica C.; Arana, Fernando Sergio; Morgan, Hannah L.; Milton, Amy L.; Lee, Jonathan L.; Aitken, Michael R. F.; Dickinson, Anthony; Everitt, Barry J.; Absalom, Anthony R.; Adapa, Ram; Subramanian, Naresh; Taylor, Jane R.; Krystal, John H.; Fletcher, Paul C.

    2013-01-01

    Delusions are the persistent and often bizarre beliefs that characterise psychosis. Previous studies have suggested that their emergence may be explained by disturbances in prediction error-dependent learning. Here we set up complementary studies in order to examine whether such a disturbance also modulates memory reconsolidation and hence explains their remarkable persistence. First, we quantified individual brain responses to prediction error in a causal learning task in 18 human subjects (8 female). Next, a placebo-controlled within-subjects study of the impact of ketamine was set up on the same individuals. We determined the influence of this NMDA receptor antagonist (previously shown to induce aberrant prediction error signal and lead to transient alterations in perception and belief) on the evolution of a fear memory over a 72 hour period: they initially underwent Pavlovian fear conditioning; 24 hours later, during ketamine or placebo administration, the conditioned stimulus (CS) was presented once, without reinforcement; memory strength was then tested again 24 hours later. Re-presentation of the CS under ketamine led to a stronger subsequent memory than under placebo. Moreover, the degree of strengthening correlated with individual vulnerability to ketamine's psychotogenic effects and with prediction error brain signal. This finding was partially replicated in an independent sample with an appetitive learning procedure (in 8 human subjects, 4 female). These results suggest a link between altered prediction error, memory strength and psychosis. They point to a core disruption that may explain not only the emergence of delusional beliefs but also their persistence. PMID:23776445

  8. DNA polymerase η mutational signatures are found in a variety of different types of cancer.

    PubMed

    Rogozin, Igor B; Goncearenco, Alexander; Lada, Artem G; De, Subhajyoti; Yurchenko, Vyacheslav; Nudelman, German; Panchenko, Anna R; Cooper, David N; Pavlov, Youri I

    2018-01-01

    DNA polymerase (pol) η is a specialized error-prone polymerase with at least two quite different and contrasting cellular roles: to mitigate the genetic consequences of solar UV irradiation, and to promote somatic hypermutation in the variable regions of immunoglobulin genes. Misregulation and mistargeting of pol η can compromise genome integrity. We explored whether the mutational signature of pol η could be found in datasets of human somatic mutations derived from normal and cancer cells. A substantial excess of single and tandem somatic mutations within known pol η mutable motifs was noted in skin cancer as well as in many other types of human cancer, suggesting that somatic mutations in A:T bases generated by DNA polymerase η are a common feature of tumorigenesis. Another peculiarity of pol η mutational signatures, mutations in YCG motifs, led us to speculate that error-prone DNA synthesis opposite methylated CpG dinucleotides by misregulated pol η in tumors might constitute an additional mechanism of cytosine demethylation in this hypermutable dinucleotide.
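
    A toy sketch of what counting mutations inside a mutable motif looks like in practice is given below. The WA motif (W = A or T, with the mutation at the A) is used only as an illustration commonly associated with pol η; the paper's exact motif set, genome-wide data and statistics are not reproduced here.

      # Hedged sketch: count how many observed substitutions fall inside a candidate mutable motif.
      import re

      sequence = "TTACGTAACGTTAGCATACGTA"        # placeholder reference sequence
      mutated_positions = [2, 7, 13, 18]         # placeholder 0-based positions of substitutions

      # Position of the A in each WA dinucleotide (lookahead so overlapping motifs are all found).
      motif_positions = {m.start() + 1 for m in re.finditer(r"(?=[AT]A)", sequence)}
      in_motif = [p for p in mutated_positions if p in motif_positions]
      print(f"{len(in_motif)} of {len(mutated_positions)} mutations fall in WA motifs")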

  9. Impact of an antiretroviral stewardship strategy on medication error rates.

    PubMed

    Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E

    2018-05-02

    The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate (p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error (p < 0.001) and those with 2 or more errors (p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention (p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.

  10. An Automatic Quality Control Pipeline for High-Throughput Screening Hit Identification.

    PubMed

    Zhai, Yufeng; Chen, Kaisheng; Zhong, Yang; Zhou, Bin; Ainscow, Edward; Wu, Ying-Ta; Zhou, Yingyao

    2016-09-01

    The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community. © 2016 Society for Laboratory Automation and Screening.
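
    The interplate and intraplate corrections are described above only at a high level. The sketch below shows one standard ingredient of such corrections, a two-way median polish (the core of the B-score), as a generic illustration rather than the auto-QC pipeline itself.

      # Hedged sketch: remove row/column systematic effects from one assay plate with a median polish.
      import numpy as np

      plate = np.random.randn(16, 24) + 0.05 * np.arange(24)        # toy 384-well plate with a column drift

      residual = plate.astype(float).copy()
      for _ in range(10):                                           # a few sweeps are usually enough
          residual -= np.median(residual, axis=1, keepdims=True)    # remove row effects
          residual -= np.median(residual, axis=0, keepdims=True)    # remove column effects

      mad = 1.4826 * np.median(np.abs(residual - np.median(residual)))
      b_score = residual / mad                                      # robustly scaled residuals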

  11. Do dogs (Canis lupus familiaris) make counterproductive choices because they are sensitive to human ostensive cues?

    PubMed

    Marshall-Pescini, Sarah; Passalacqua, Chiara; Miletto Petrazzini, Maria Elena; Valsecchi, Paola; Prato-Previde, Emanuela

    2012-01-01

    Dogs appear to be sensitive to human ostensive communicative cues in a variety of situations; however, there is still a measure of controversy as to the way in which these cues influence human-dog interactions. There is evidence, for instance, that dogs can be led into making evaluation errors in a quantity discrimination task, for example losing their preference for a larger food quantity if a human shows a preference for a smaller one, yet there is, so far, no explanation for this phenomenon. Using a modified version of this task, in the current study we investigated whether non-social, social or communicative cues (alone or in combination) cause dogs to go against their preference for the larger food quantity. Results show that dogs' evaluation errors are indeed caused by a social bias, but, somewhat contrary to previous studies, they highlight the potent effect of stimulus enhancement (handling the target) in influencing the dogs' response. A mild influence on the dog's behaviour was found only when different ostensive cues (and no handling of the target) were used in combination, suggesting their cumulative effect. The discussion addresses possible motives for discrepancies with previous studies, suggesting that both the intentionality and the directionality of the action may be important in causing dogs' social biases.

  12. Do Dogs (Canis lupus familiaris) Make Counterproductive Choices Because They Are Sensitive to Human Ostensive Cues?

    PubMed Central

    Marshall-Pescini, Sarah; Passalacqua, Chiara; Miletto Petrazzini, Maria Elena; Valsecchi, Paola; Prato-Previde, Emanuela

    2012-01-01

    Dogs appear to be sensitive to human ostensive communicative cues in a variety of situations; however, there is still a measure of controversy as to the way in which these cues influence human-dog interactions. There is evidence, for instance, that dogs can be led into making evaluation errors in a quantity discrimination task, for example losing their preference for a larger food quantity if a human shows a preference for a smaller one, yet there is, so far, no explanation for this phenomenon. Using a modified version of this task, in the current study we investigated whether non-social, social or communicative cues (alone or in combination) cause dogs to go against their preference for the larger food quantity. Results show that dogs' evaluation errors are indeed caused by a social bias, but, somewhat contrary to previous studies, they highlight the potent effect of stimulus enhancement (handling the target) in influencing the dogs' response. A mild influence on the dog's behaviour was found only when different ostensive cues (and no handling of the target) were used in combination, suggesting their cumulative effect. The discussion addresses possible motives for discrepancies with previous studies, suggesting that both the intentionality and the directionality of the action may be important in causing dogs' social biases. PMID:22558150

  13. Human factors engineering and design validation for the redesigned follitropin alfa pen injection device.

    PubMed

    Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne

    2015-05-01

    To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to the outer needle cap design to mitigate needle-stick potential. In the first validation study (49 users, 343 simulated injections), one observed critical use error in the FN group resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.

  14. Adaptive 84.44-190 Mbit/s phosphor-LED wireless communication utilizing no blue filter at practical transmission distance.

    PubMed

    Yeh, C H; Chow, C W; Chen, H Y; Chen, J; Liu, Y L

    2014-04-21

    We propose and experimentally demonstrate a white-light phosphor-LED visible light communication (VLC) system with an adaptive 84.44 to 190 Mbit/s 16 quadrature-amplitude-modulation (QAM) orthogonal-frequency-division-multiplexing (OFDM) signal utilizing a bit-loading method. Here, the optimal analog pre-equalization design is performed at the LED transmitter (Tx) side and no blue filter is used at the receiver (Rx) side. Hence, the ~1 MHz modulation bandwidth of the phosphor-LED could be extended to 30 MHz. In addition, measured bit error rates (BERs) below 3.8 × 10⁻³ [the forward error correction (FEC) threshold] are achieved at the different measured data rates over practical transmission distances of 0.75 to 2 m.
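
    The adaptive rate comes from bit loading, i.e. assigning more bits to subcarriers with higher estimated SNR. The sketch below is a generic threshold bit-loading illustration capped at 16-QAM; the SNR profile, SNR gap and symbol rate are assumptions, not the measured phosphor-LED channel of this work.

      # Hedged sketch: per-subcarrier bit allocation from an estimated SNR, capped at 16-QAM (4 bits).
      import numpy as np

      n_sc = 64                                        # number of data subcarriers (assumed)
      snr_db = 25.0 - 0.35 * np.arange(n_sc)           # toy low-pass channel: SNR falls with frequency
      snr = 10 ** (snr_db / 10)

      gamma = 10 ** (6.0 / 10)                         # SNR gap for the target BER (assumed 6 dB)
      bits = np.clip(np.floor(np.log2(1 + snr / gamma)).astype(int), 0, 4)

      symbol_rate = 2.5e6                              # OFDM symbols per second (assumed)
      print(f"aggregate rate ≈ {bits.sum() * symbol_rate / 1e6:.1f} Mbit/s")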

  15. TRAC based sensing for autonomous rendezvous

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.; Monford, Leo

    1991-01-01

    The Targeting Reflective Alignment Concept (TRAC) sensor is to be used in an effort to support an Autonomous Rendezvous and Docking (AR&D) flight experiment. The TRAC sensor uses a fixed-focus, fixed-iris CCD camera and a target that is a combination of active and passive components. The system experiment is anticipated to fly in 1994 using two Commercial Experiment Transporters (COMETs). The requirements for the sensor are: bearing error less than or equal to 0.075 deg; bearing error rate less than 0.3 deg/sec; attitude error less than 0.5 deg; and attitude rate error less than 2.0 deg/sec. The range requirement depends on the range and the range rate of the vehicle. The active component of the target is several 'kilo-bright' LEDs that can emit 2500 millicandela with 40 milliwatts of input power. Flashing the lights in a known pattern eliminates background illumination. The system should be able to rendezvous from 300 meters all the way to capture. A question that arose during the presentation: what is the lifetime of the LEDs and their sensitivity to radiation? The LEDs should be manufactured to Military Specifications, coated with silicon dioxide, and all other space-qualified precautions should be taken. The LEDs will not be on all the time, so they should easily last the two-year mission.

  16. Color dependence with horizontal-viewing angle and colorimetric characterization of two displays using different backlighting

    NASA Astrophysics Data System (ADS)

    Castro, José J.; Pozo, Antonio M.; Rubiño, Manuel

    2013-11-01

    In this work we studied the color dependence on horizontal viewing angle and the colorimetric characterization of two liquid-crystal displays (LCDs) using two different backlights: Cold Cathode Fluorescent Lamps (CCFLs) and light-emitting diodes (LEDs). The LCDs studied had identical resolution, size, and technology (TFT - thin film transistor). The colorimetric measurements were made with the spectroradiometer SpectraScan PR-650 following the procedure recommended in the European guideline EN 61747-6. For each display, we measured at the centre of the screen the chromaticity coordinates at horizontal viewing angles of 0, 20, 40, 60 and 80 degrees for the achromatic (A), red (R), green (G) and blue (B) channels. Results showed a greater color-gamut area for the display with the LED backlight than for the one with the CCFL backlight, indicating a greater range of colors perceptible by human vision. This color-gamut area diminished with viewing angle for both displays. Larger differences between trends across viewing angles were observed for the LED-backlit display, especially for the R and G channels, demonstrating a higher variability of the chromaticity coordinates with viewing angle. The best additivity was reached by the LED-backlit display (a lower error percentage). Overall, the LED-backlit display provided better color performance.
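
    The color-gamut area compared above is simply the area of the triangle spanned by the red, green and blue primaries in the chromaticity diagram, which the shoelace formula gives directly. The chromaticity values below are illustrative, not the measured values of the two displays.

      # Hedged sketch: gamut area of an RGB primary triangle in the CIE 1931 (x, y) diagram.
      def gamut_area(primaries):
          # primaries: three (x, y) chromaticity pairs for R, G, B; shoelace formula for triangle area.
          (x1, y1), (x2, y2), (x3, y3) = primaries
          return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

      led_backlight = [(0.64, 0.33), (0.31, 0.63), (0.15, 0.06)]    # assumed example primaries
      ccfl_backlight = [(0.62, 0.34), (0.30, 0.60), (0.15, 0.07)]
      print(gamut_area(led_backlight), gamut_area(ccfl_backlight))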

  17. A human factors engineering paradigm for patient safety: designing to support the performance of the healthcare professional

    PubMed Central

    Karsh, B‐T; Holden, R J; Alper, S J; Or, C K L

    2006-01-01

    The goal of improving patient safety has led to a number of paradigms for directing improvement efforts. The main paradigms to date have focused on reducing injuries, reducing errors, or improving evidence based practice. In this paper a human factors engineering paradigm is proposed that focuses on designing systems to improve the performance of healthcare professionals and to reduce hazards. Both goals are necessary, but neither is sufficient to improve safety. We suggest that the road to patient and employee safety runs through the healthcare professional who delivers care. To that end, several arguments are provided to show that designing healthcare delivery systems to support healthcare professional performance and hazard reduction should yield significant patient safety benefits. The concepts of human performance and hazard reduction are explained. PMID:17142611

  18. Conflicting calculations of pelvic incidence and pelvic tilt secondary to transitional lumbosacral anatomy (lumbarization of S-1): case report.

    PubMed

    Crawford, Charles H; Glassman, Steven D; Gum, Jeffrey L; Carreon, Leah Y

    2017-01-01

    Advancements in the understanding of adult spinal deformity have led to a greater awareness of the role of the pelvis in maintaining sagittal balance and alignment. Pelvic incidence has emerged as a key radiographic measure and should closely match lumbar lordosis. As proper measurement of the pelvic incidence requires accurate identification of the S-1 endplate, lumbosacral transitional anatomy may lead to errors. The purpose of this study is to demonstrate how lumbosacral transitional anatomy may lead to errors in the measurement of pelvic parameters. The current case highlights one of the potential complications that can be avoided with awareness. The authors report the case of a 61-year-old man who had undergone prior lumbar surgeries and then presented with symptomatic lumbar stenosis and sagittal malalignment. Radiographs showed a lumbarized S-1. Prior numbering of the segments in previous surgical and radiology reports led to a pelvic incidence calculation of 61°. Corrected numbering of the segments using the lumbarized S-1 endplate led to a pelvic incidence calculation of 48°. Without recognition of the lumbosacral anatomy, overcorrection of the lumbar lordosis might have led to negative sagittal balance and the propensity to develop proximal junction failure. This case illustrates that improper identification of lumbosacral transitional anatomy may lead to errors that could affect clinical outcome. Awareness of this potential error may help improve patient outcomes.

  19. Student Errors in Dynamic Mathematical Environments

    ERIC Educational Resources Information Center

    Brown, Molly; Bossé, Michael J.; Chandler, Kayla

    2016-01-01

    This study investigates the nature of student errors in the context of problem solving and Dynamic Math Environments. This led to the development of the Problem Solving Action Identification Framework; this framework captures and defines all activities and errors associated with problem solving in a dynamic math environment. Found are three…

  20. Active learning: learning a motor skill without a coach.

    PubMed

    Huang, Vincent S; Shadmehr, Reza; Diedrichsen, Jörn

    2008-08-01

    When we learn a new skill (e.g., golf) without a coach, we are "active learners": we have to choose the specific components of the task on which to train (e.g., iron, driver, putter, etc.). What guides our selection of the training sequence? How do choices that people make compare with choices made by machine learning algorithms that attempt to optimize performance? We asked subjects to learn the novel dynamics of a robotic tool while moving it in four directions. They were instructed to choose their practice directions to maximize their performance in subsequent tests. We found that their choices were strongly influenced by motor errors: subjects tended to immediately repeat an action if that action had produced a large error. This strategy was correlated with better performance on test trials. However, even when participants performed perfectly on a movement, they did not avoid repeating that movement. The probability of repeating an action did not drop below chance even when no errors were observed. This behavior led to suboptimal performance. It also violated a strong prediction of current machine learning algorithms, which solve the active learning problem by choosing a training sequence that will maximally reduce the learner's uncertainty about the task. While we show that these algorithms do not provide an adequate description of human behavior, our results suggest ways to improve human motor learning by helping people choose an optimal training sequence.

  1. Human factors evaluation of teletherapy: Function and task analysis. Volume 2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaye, R.D.; Henriksen, K.; Jones, R.

    1995-07-01

    As a treatment methodology, teletherapy selectively destroys cancerous and other tissue by exposure to an external beam of ionizing radiation. Sources of radiation are either a radioactive isotope, typically Cobalt-60 (Co-60), or a linear accelerator. Records maintained by the NRC have identified instances of teletherapy misadministration where the delivered radiation dose has differed from the radiation prescription (e.g., instances where fractions were delivered to the wrong patient, to the wrong body part, or were too great or too little with respect to the defined treatment volume). Both human error and machine malfunction have led to misadministrations. Effective and safe treatment requires a concern for precision and consistency of human-human and human-machine interactions throughout the course of therapy. The present study is the first part of a series of human factors evaluations for identifying the root causes that lead to human error in the teletherapy environment. The human factors evaluations included: (1) a function and task analysis of teletherapy activities, (2) an evaluation of the human-system interfaces, (3) an evaluation of procedures used by teletherapy staff, (4) an evaluation of the training and qualifications of treatment staff (excluding the oncologists), (5) an evaluation of organizational practices and policies, and (6) an identification of problems and alternative approaches for NRC and industry attention. The present report addresses the function and task analysis of teletherapy activities and provides the foundation for the conduct of the subsequent evaluations. The report includes sections on background, methodology, a description of the function and task analysis, and use of the task analysis findings for the subsequent tasks. The function and task analysis database also is included.

  2. [Epidemiology of refractive errors].

    PubMed

    Wolfram, C

    2017-07-01

    Refractive errors are very common and can lead to severe pathological changes in the eye. This article analyzes the epidemiology of refractive errors in the general population in Germany and worldwide and describes common definitions for refractive errors and clinical characteristics of pathological changes. Refractive errors differ between age groups due to refractive changes during the lifetime and also due to generation-specific factors. Current research about the etiology of refractive errors has strengthened the influence of environmental factors, which has led to new strategies for the prevention of refractive pathologies.

  3. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition or how to prevent human error. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  4. A wearable infrared video pupillography with multi-stimulation of consistent illumination for binocular pupil response

    NASA Astrophysics Data System (ADS)

    Mang, Ou-Yang; Ko, Mei Lan; Tsai, Yi-Chun; Chiou, Jin-Chern; Huang, Ting-Wei

    2016-03-01

    The pupil response to light can reflect various kinds of diseases related to physiological health. Pupillary abnormalities may result from autonomic neuropathy, glaucoma, diabetes, genetic diseases, and high myopia. In the early stage of neuropathy, the condition is often asymptomatic and difficult for ophthalmologists to detect. In addition, the position of an injured nerve can lead to unsynchronized pupil responses between the two eyes. In our study, we designed a pupilometer to measure the binocular pupil response simultaneously. It uses LEDs of different wavelengths (white, red, green, and blue) to stimulate the pupil and records the process. The pupilometer therefore mainly contains two systems. One is the image acquisition system, which uses two camera modules with the same external trigger signal to capture images of the pupil simultaneously. The other is the illumination system, which uses boost converter ICs and LED driver ICs to supply a constant current to the LEDs, maintaining consistent luminance in each experiment and reducing experimental error. Furthermore, four infrared LEDs are arranged near the stimulating LEDs to illuminate the eyes and increase image contrast for image processing. In our design, we successfully implemented synchronized image acquisition at a sampling rate of 30 fps together with a stable illumination system for precise experimental measurement.

  5. Which non-technical skills do junior doctors require to prescribe safely? A systematic review.

    PubMed

    Dearden, Effie; Mellanby, Edward; Cameron, Helen; Harden, Jeni

    2015-12-01

    Prescribing errors are a major source of avoidable morbidity and mortality. Junior doctors write most in-hospital prescriptions and are the least experienced members of the healthcare team. This puts them at high risk of error and makes them attractive targets for interventions to improve prescription safety. Error analysis has shown a background of complex environments with multiple contributory conditions. Similar conditions in other high risk industries, such as aviation, have led to an increased understanding of so-called human factors and the use of non-technical skills (NTS) training to try to reduce error. To date no research has examined the NTS required for safe prescribing. The aim of this review was to develop a prototype NTS taxonomy for safe prescribing, by junior doctors, in hospital settings. A systematic search identified 14 studies analyzing prescribing behaviours and errors by junior doctors. Framework analysis was used to extract data from the studies and identify behaviours related to categories of NTS that might be relevant to safe and effective prescribing performance by junior doctors. Categories were derived from existing literature and inductively from the data. A prototype taxonomy of relevant categories (situational awareness, decision making, communication and team working, and task management) and elements was constructed. This prototype will form the basis of future work to create a tool that can be used for training and assessment of medical students and junior doctors to reduce prescribing error in the future. © 2015 The British Pharmacological Society.

  6. Mental models and human reasoning

    PubMed Central

    Johnson-Laird, Philip N.

    2010-01-01

    To be rational is to be able to reason. Thirty years ago psychologists believed that human reasoning depended on formal rules of inference akin to those of a logical calculus. This hypothesis ran into difficulties, which led to an alternative view: reasoning depends on envisaging the possibilities consistent with the starting point—a perception of the world, a set of assertions, a memory, or some mixture of them. We construct mental models of each distinct possibility and derive a conclusion from them. The theory predicts systematic errors in our reasoning, and the evidence corroborates this prediction. Yet, our ability to use counterexamples to refute invalid inferences provides a foundation for rationality. On this account, reasoning is a simulation of the world fleshed out with our knowledge, not a formal rearrangement of the logical skeletons of sentences. PMID:20956326

  7. Exploring the effect of diffuse reflection on indoor localization systems based on RSSI-VLC.

    PubMed

    Mohammed, Nazmi A; Elkarim, Mohammed Abd

    2015-08-10

    This work explores and evaluates the effect of diffuse light reflection on the accuracy of indoor localization systems based on visible light communication (VLC) in a high reflectivity environment using a received signal strength indication (RSSI) technique. The effect of the essential receiver (Rx) and transmitter (Tx) parameters on the localization error with different transmitted LED power and wall reflectivity factors is investigated at the worst Rx coordinates for a directed/overall link. Since this work assumes harsh operating conditions (i.e., a multipath model, high reflectivity surfaces, worst Rx position), an error of ≥ 1.46 m is found. To achieve a localization error in the range of 30 cm under these conditions with moderate LED power (i.e., P = 0.45 W), low reflectivity walls (i.e., ρ = 0.1) should be used, which would enable a localization error of approximately 7 mm at the room's center.
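
    As an illustration of the RSSI principle underlying such systems, the line-of-sight received optical power is commonly modelled with a Lambertian channel gain, from which a distance estimate can be inverted. The sketch below is a generic textbook-style model, not the specific multipath simulation of this paper; all parameter values (Lambertian order, photodiode area, angles) are assumptions.

```python
import math

def lambertian_gain(d, phi, psi, m=1, area=1e-4, psi_fov=math.radians(60)):
    """Line-of-sight DC channel gain H(0) of a VLC link.
    d: Tx-Rx distance [m]; phi: irradiance angle; psi: incidence angle;
    m: Lambertian order; area: photodiode area [m^2]."""
    if psi > psi_fov:
        return 0.0
    return ((m + 1) * area / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * math.cos(psi))

def estimate_distance(p_rx, p_tx, phi, psi, m=1, area=1e-4):
    """Invert the Lambertian model for distance, assuming angles are known."""
    k = (m + 1) * area * math.cos(phi) ** m * math.cos(psi) / (2 * math.pi)
    return math.sqrt(k * p_tx / p_rx)

p_tx = 0.45                     # transmitted LED power [W], as in the abstract
d_true = 2.0
p_rx = p_tx * lambertian_gain(d_true, 0.3, 0.3)
print(f"estimated distance: {estimate_distance(p_rx, p_tx, 0.3, 0.3):.2f} m")
```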

  8. Handling Errors as They Arise in Whole-Class Interactions

    ERIC Educational Resources Information Center

    Ingram, Jenni; Pitt, Andrea; Baldry, Fay

    2015-01-01

    There has been a long history of research into errors and their role in the teaching and learning of mathematics. This research has led to a change to pedagogical recommendations from avoiding errors to explicitly using them in lessons. In this study, 22 mathematics lessons were video-recorded and transcribed. A conversation analytic (CA) approach…

  9. The Weak Spots in Contemporary Science (and How to Fix Them)

    PubMed Central

    2017-01-01

    Simple Summary Several fraud cases, widespread failure to replicate or reproduce seminal findings, and pervasive error in the scientific literature have led to a crisis of confidence in the biomedical, behavioral, and social sciences. In this review, the author discusses some of the core findings that point at weak spots in contemporary science and considers the human factors that underlie them. He delves into the human tendencies that create errors and biases in data collection, analyses, and reporting of research results. He presents several solutions to deal with observer bias, publication bias, the researcher’s tendency to exploit degrees of freedom in their analysis of data, low statistical power, and errors in the reporting of results, with a focus on the specific challenges in animal welfare research. Abstract In this review, the author discusses several of the weak spots in contemporary science, including scientific misconduct, the problems of post hoc hypothesizing (HARKing), outcome switching, theoretical bloopers in formulating research questions and hypotheses, selective reading of the literature, selective citing of previous results, improper blinding and other design failures, p-hacking or researchers’ tendency to analyze data in many different ways to find positive (typically significant) results, errors and biases in the reporting of results, and publication bias. The author presents some empirical results highlighting problems that lower the trustworthiness of reported results in scientific literatures, including that of animal welfare studies. Some of the underlying causes of these biases are discussed based on the notion that researchers are only human and hence are not immune to confirmation bias, hindsight bias, and minor ethical transgressions. The author discusses solutions in the form of enhanced transparency, sharing of data and materials, (post-publication) peer review, pre-registration, registered reports, improved training, reporting guidelines, replication, dealing with publication bias, alternative inferential techniques, power, and other statistical tools. PMID:29186879

  10. Effects of smartphone use with and without blue light at night in healthy adults: A randomized, double-blind, cross-over, placebo-controlled comparison.

    PubMed

    Heo, Jung-Yoon; Kim, Kiwon; Fava, Maurizio; Mischoulon, David; Papakostas, George I; Kim, Min-Ji; Kim, Dong Jun; Chang, Kyung-Ah Judy; Oh, Yunhye; Yu, Bum-Hee; Jeon, Hong Jin

    2017-04-01

    Smartphones deliver light to users through Light Emitting Diode (LED) displays. Blue light is the most potent wavelength for sleep and mood. This study investigated the immediate effects of smartphone blue light LED on humans at night. We investigated changes in serum melatonin levels, cortisol levels, body temperature, and psychiatric measures with a randomized, double-blind, cross-over, placebo-controlled design of two 3-day admissions. Each subject played smartphone games with either conventional LED or suppressed blue light from 7:30 to 10:00PM (150 min). Then, they were readmitted and conducted the same procedure with the other type of smartphone. Serum melatonin levels were measured in 60-min intervals before, during and after use of the smartphones. Serum cortisol levels and body temperature were monitored every 120 min. The Profile of Mood States (POMS), Epworth Sleepiness Scale (ESS), Fatigue Severity Scale (FSS), and auditory and visual Continuous Performance Tests (CPTs) were administered. Among the 22 participants who were each admitted twice, use of blue light smartphones was associated with significantly decreased sleepiness (Cohen's d = 0.49, Z = 43.50, p = 0.04) and confusion-bewilderment (Cohen's d = 0.53, Z = 39.00, p = 0.02), and increased commission error (Cohen's d = -0.59, t = -2.64, p = 0.02). Also, users of blue light smartphones experienced a longer time to reach dim light melatonin onset 50% (2.94 vs. 2.70 h) and had increases in body temperature, serum melatonin levels, and cortisol levels, although these changes were not statistically significant. Use of blue light LED smartphones at night may negatively influence sleep and commission errors, while it may not be enough to lead to significant changes in serum melatonin and cortisol levels. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that: humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors; and adaptation to conditions is a potent influence on human behavior in discretionary situations.

  12. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors and their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
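
    For readers unfamiliar with the HEART arithmetic referenced above, the assessed human error probability is conventionally obtained by multiplying a generic-task nominal HEP by a factor for each applicable error-producing condition (EPC), where that factor is ((max EPC effect - 1) x assessed proportion of affect) + 1. The sketch below uses made-up numbers purely for illustration; they are not values from the NASA analysis.

```python
def heart_hep(nominal_hep, epcs):
    """HEART assessed HEP.
    nominal_hep: nominal error probability for the generic task type.
    epcs: list of (max_effect, assessed_proportion_of_affect) pairs."""
    hep = nominal_hep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1) * proportion + 1
    return min(hep, 1.0)   # probabilities are capped at 1

# Hypothetical example: nominal HEP 0.003 with two EPCs applied.
print(heart_hep(0.003, [(17, 0.4), (4, 0.25)]))   # -> ~0.039
```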

  14. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors and their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  15. System calibration method for Fourier ptychographic microscopy

    NASA Astrophysics Data System (ADS)

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high-resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it would be unlikely to distinguish the dominating error from these degraded reconstructions without any preknowledge. In addition, systematic error is generally a mixture of various error sources in the real situation, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experiment conditions, and does not require any preknowledge, which makes the FPM more pragmatic.
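
    To illustrate the role of simulated annealing in a calibration loop like the one described above, the generic sketch below minimizes a scalar error metric over a small parameter vector (for instance, LED position offsets). It is a minimal stand-in under stated assumptions, not the SC-FPM algorithm itself, and the toy error metric is a placeholder for the FPM reconstruction error.

```python
import math, random

def simulated_annealing(error_metric, x0, step=0.1, t0=1.0, cooling=0.95,
                        iters=500, seed=0):
    """Generic simulated-annealing minimizer for a calibration error metric.
    error_metric: callable mapping a parameter vector to a scalar cost.
    x0: initial parameter guess (e.g., LED position offsets)."""
    rng = random.Random(seed)
    x, fx, t = list(x0), error_metric(x0), t0
    best_x, best_f = list(x), fx
    for _ in range(iters):
        cand = [xi + rng.gauss(0, step) for xi in x]
        fc = error_metric(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
            if fc < best_f:
                best_x, best_f = list(cand), fc
        t *= cooling           # an adaptive step size could also be added here
    return best_x, best_f

# Toy error metric standing in for the FPM reconstruction error (hypothetical):
toy_metric = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.1) ** 2
print(simulated_annealing(toy_metric, [0.0, 0.0]))
```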

  16. ABO-incompatible blood transfusion and invasive therapeutic approaches during pediatric cardiopulmonary bypass.

    PubMed

    Aliç, Yasin; Akpek, Elif A; Dönmez, Asli; Ozkan, Süleyman; Perfusionist, Güray Yener; Aslamaci, Sait

    2008-10-01

    Human error has been identified as a major source of ABO-incompatible blood transfusion which most often results from blood being given to the wrong patient. We present a case of inadvertent administration of ABO-incompatible blood to a 6-mo-old child who underwent congenital heart surgery and discuss the use of invasive therapeutic approaches. Invasive techniques included total circulatory arrest and large-volume exchange transfusion, along with conventional ultrafiltration and plasmapheresis, which could all be performed rapidly and effectively. The combination of standard pharmacologic therapies and alternative invasive techniques after a massive ABO-incompatible blood transfusion led to a favorable outcome in our patient.

  17. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  18. Understanding human management of automation errors.

    PubMed

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2014-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.

  19. Method and apparatus for detecting timing errors in a system oscillator

    DOEpatents

    Gliebe, Ronald J.; Kramer, William R.

    1993-01-01

    A method of detecting timing errors in a system oscillator for an electronic device, such as a power supply, includes the step of comparing a system oscillator signal with a delayed generated signal and generating a signal representative of the timing error when the system oscillator signal is not identical to the delayed signal. An LED indicates to an operator that a timing error has occurred. A hardware circuit implements the above-identified method.
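
    The comparison step described in this patent abstract can be sketched in software as follows: compare a sample stream from the oscillator with its delayed replica and flag any mismatch. This is only a software analogue of the hardware comparator, with invented sample values.

```python
def detect_timing_errors(signal, delayed, tolerance=0):
    """Compare an oscillator sample stream with its delayed replica and
    return the indices where they differ beyond a tolerance (a software
    analogue of the hardware comparator described above)."""
    return [i for i, (a, b) in enumerate(zip(signal, delayed))
            if abs(a - b) > tolerance]

# Hypothetical square-wave samples; the delayed copy has one corrupted sample.
signal  = [0, 1, 0, 1, 0, 1, 0, 1]
delayed = [0, 1, 0, 1, 0, 0, 0, 1]
errors = detect_timing_errors(signal, delayed)
led_on = bool(errors)          # in hardware, this would drive the indicator LED
print(errors, led_on)
```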

  20. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' response to error-likely situations was examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  1. Auditing as part of the terminology design life cycle.

    PubMed

    Min, Hua; Perl, Yehoshua; Chen, Yan; Halper, Michael; Geller, James; Wang, Yue

    2006-01-01

    To develop and test an auditing methodology for detecting errors in medical terminologies satisfying systematic inheritance. This methodology is based on various abstraction taxonomies that provide high-level views of a terminology and highlight potentially erroneous concepts. Our auditing methodology is based on dividing concepts of a terminology into smaller, more manageable units. First, we divide the terminology's concepts into areas according to their relationships/roles. Then each multi-rooted area is further divided into partial-areas (p-areas) that are singly-rooted. Each p-area contains a set of structurally and semantically uniform concepts. Two kinds of abstraction networks, called the area taxonomy and p-area taxonomy, are derived. These taxonomies form the basis for the auditing approach. Taxonomies tend to highlight potentially erroneous concepts in areas and p-areas. Human reviewers can focus their auditing efforts on the limited number of problematic concepts following two hypotheses on the probable concentration of errors. A sample of the area taxonomy and p-area taxonomy for the Biological Process (BP) hierarchy of the National Cancer Institute Thesaurus (NCIT) was derived from the application of our methodology to its concepts. These views led to the detection of a number of different kinds of errors that are reported, and to confirmation of the hypotheses on error concentration in this hierarchy. Our auditing methodology based on area and p-area taxonomies is an efficient tool for detecting errors in terminologies satisfying systematic inheritance of roles, and thus facilitates their maintenance. This methodology concentrates a domain expert's manual review on portions of the concepts with a high likelihood of errors.
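
    The partitioning idea described above (group concepts into areas by their exact set of roles, then split each area into singly-rooted partial-areas) can be sketched in a few lines. The fragment below is a simplified illustration with an invented terminology fragment; it only identifies p-area roots and omits assigning descendants, which the full methodology would also do.

```python
from collections import defaultdict

# Hypothetical terminology fragment: concept -> (set of role types, set of parents)
concepts = {
    "A": ({"regulates"}, set()),
    "B": ({"regulates"}, {"A"}),
    "C": ({"regulates", "part_of"}, {"B"}),
    "D": ({"regulates", "part_of"}, set()),
}

# Areas: concepts sharing exactly the same set of roles.
areas = defaultdict(set)
for name, (roles, _parents) in concepts.items():
    areas[frozenset(roles)].add(name)

# P-areas: within an area, each root (no parent inside the same area) starts a p-area.
for roles, members in areas.items():
    roots = [c for c in members if not (concepts[c][1] & members)]
    print(sorted(roles), "area:", sorted(members), "| p-area roots:", sorted(roots))
```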

  2. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  3. Attitude Error Representations for Kalman Filtering

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis; Bauer, Frank H. (Technical Monitor)

    2002-01-01

    The quaternion has the lowest dimensionality possible for a globally nonsingular attitude representation. The quaternion must obey a unit norm constraint, though, which has led to the development of an extended Kalman filter using a quaternion for the global attitude estimate and a three-component representation for attitude errors. We consider various attitude error representations for this Multiplicative Extended Kalman Filter and its second-order extension.
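
    The multiplicative composition mentioned above can be illustrated with a short sketch: a three-component attitude-error (rotation) vector is mapped to an error quaternion and composed with the reference quaternion, then renormalized. The rotation-vector mapping used here is only one of the error representations the paper considers, and the quaternion convention ([x, y, z, w], Hamilton product) is an assumption of this sketch.

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product of quaternions stored as [x, y, z, w]."""
    x1, y1, z1, w1 = q
    x2, y2, z2, w2 = p
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

def apply_error(q_ref, a):
    """Multiplicative update: compose a small rotation-vector error 'a' (rad)
    with the reference quaternion and renormalize."""
    angle = np.linalg.norm(a)
    if angle < 1e-12:
        dq = np.array([0.0, 0.0, 0.0, 1.0])
    else:
        axis = np.asarray(a) / angle
        dq = np.concatenate([axis * np.sin(angle / 2), [np.cos(angle / 2)]])
    q = quat_mult(dq, q_ref)
    return q / np.linalg.norm(q)

q_ref = np.array([0.0, 0.0, 0.0, 1.0])              # identity attitude
print(apply_error(q_ref, [0.001, -0.002, 0.0005]))  # small attitude correction
```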

  4. Leader personality and crew effectiveness - A full-mission simulation experiment

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.; Foushee, H. Clayton

    1989-01-01

    A full-mission simulation research study was completed to assess the impact of individual personality on crew performance. Using a selection algorithm described by Chidester (1987), captains were classified as fitting one of three profiles along a battery of personality assessment scales. The performances of 23 crews led by captains fitting each profile were contrasted over a one and one-half day simulated trip. Crews led by captains fitting a positive Instrumental-Expressive profile (high achievement motivation and interpersonal skill) were consistently effective and made fewer errors. Crews led by captains fitting a Negative Expressive profile (below average achievement motivation, negative expressive style, such as complaining) were consistently less effective and made more errors. Crews led by captains fitting a Negative Instrumental profile (high levels of competitiveness, Verbal Aggressiveness, and Impatience and Irritability) were less effective on the first day but equal to the best on the second day. These results underscore the importance of stable personality variables as predictors of team coordination and performance.

  5. Leader personality and crew effectiveness: Factors influencing performance in full-mission air transport simulation

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.; Foushee, H. Clayton

    1989-01-01

    A full mission simulation research study was completed to assess the potential for selection along dimensions of personality. Using a selection algorithm described by Chidester (1987), captains were classified as fitting one of three profiles using a battery of personality assessment scales, and the performances of 23 crews led by captains fitting each profile were contrasted over a one and one-half day simulated trip. Crews led by captains fitting a Positive Instrumental Expressive profile (high achievement motivation and interpersonal skill) were consistently effective and made fewer errors. Crews led by captains fitting a Negative Communion profile (below average achievement motivation, negative expressive style, such as complaining) were consistently less effective and made more errors. Crews led by captains fitting a Negative Instrumental profile (high levels of Competitiveness, Verbal Aggressiveness, and Impatience and Irritability) were less effective on the first day but equal to the best on the second day. These results underscore the importance of stable personality variables as predictors of team coordination and performance.

  6. High-speed phosphor-LED wireless communication system utilizing no blue filter

    NASA Astrophysics Data System (ADS)

    Yeh, C. H.; Chow, C. W.; Chen, H. Y.; Chen, J.; Liu, Y. L.; Wu, Y. F.

    2014-09-01

    In this paper, we propose and investigate an adaptive 84.44 to 190 Mb/s phosphor-LED visible light communication (VLC) system at a practical transmission distance. We utilize orthogonal-frequency-division-multiplexing quadrature-amplitude-modulation (OFDM-QAM) with a power/bit-loading algorithm in the proposed VLC system. In the experiment, an optimal analog pre-equalization design is applied at the LED-Tx side and no blue filter is used at the Rx side, extending the modulation bandwidth from 1 MHz to 30 MHz. The corresponding free-space transmission lengths are between 75 cm and 2 m under the various data rates of the proposed VLC system, and measured bit error rates (BERs) below the forward error correction (FEC) limit of 3.8×10⁻³ are obtained at the different transmission lengths and data rates. Finally, we believe the proposed scheme could be an alternative VLC implementation at practical distances, supporting data rates around 100 Mb/s, using a commercially available LED and PD (without optical blue filtering) in a compact size.
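
    The OFDM-QAM transmitter chain referred to above can be sketched at baseband as: map bits to QAM symbols per subcarrier, take an IFFT, and prepend a cyclic prefix. The minimal sketch below uses a fixed 4-QAM constellation for simplicity; a real bit/power-loading algorithm, as in this paper, would instead assign constellation size and power per subcarrier based on the measured SNR. All parameters here are assumptions.

```python
import numpy as np

def qam4_map(bits):
    """Map pairs of bits to 4-QAM (QPSK) symbols."""
    b = bits.reshape(-1, 2)
    return ((2 * b[:, 0] - 1) + 1j * (2 * b[:, 1] - 1)) / np.sqrt(2)

def ofdm_symbol(bits, n_subcarriers=64, cp_len=16):
    """Build one baseband OFDM symbol: QAM mapping -> IFFT -> cyclic prefix."""
    symbols = qam4_map(bits[: 2 * n_subcarriers])
    time_domain = np.fft.ifft(symbols, n_subcarriers)
    return np.concatenate([time_domain[-cp_len:], time_domain])  # prepend CP

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 128)          # 2 bits per subcarrier * 64 subcarriers
tx = ofdm_symbol(bits)
print(tx.shape)                          # (80,) = 64 samples + 16-sample prefix
```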

  7. A predictive model of nuclear power plant crew decision-making and performance in a dynamic simulation environment

    NASA Astrophysics Data System (ADS)

    Coyne, Kevin Anthony

    The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, the operators often are called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules are capable of efficiently capturing a wide spectrum of crew-to-crew variabilities. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as timing of operator actions, mental models, and decision-making activities.
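
    The discrete dynamic event tree (DDET) generation described above can be pictured with a toy enumeration: each branching rule offers alternative crew or equipment responses, and every combination of alternatives defines one branch with an associated probability. The rules, labels and probabilities below are invented for illustration and do not reproduce ADS-IDAC.

```python
from itertools import product

# Each branching rule offers alternative responses with toy probabilities.
branching_rules = [
    [("follow procedure promptly", 0.7), ("delay procedure step", 0.3)],
    [("use indicated value", 0.8), ("rely on memorized value", 0.2)],
    [("valve operates", 0.95), ("valve fails", 0.05)],
]

# Enumerate every branch of the discrete dynamic event tree by taking one
# alternative from each rule; the path probability is their product.
for path in product(*branching_rules):
    prob = 1.0
    for _, p in path:
        prob *= p
    print(f"{prob:.3f}  " + " -> ".join(label for label, _ in path))
```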

  8. Human error in airway facilities.

    DOT National Transportation Integrated Search

    2001-01-01

    This report examines human errors in Airway Facilities (AF) with the intent of preventing these errors from being : passed on to the new Operations Control Centers. To effectively manage errors, they first have to be identified. : Human factors engin...

  9. Orders on file but no labs drawn: investigation of machine and human errors caused by an interface idiosyncrasy.

    PubMed

    Schreiber, Richard; Sittig, Dean F; Ash, Joan; Wright, Adam

    2017-09-01

    In this report, we describe 2 instances in which expert use of an electronic health record (EHR) system interfaced to an external clinical laboratory information system led to unintended consequences wherein 2 patients failed to have laboratory tests drawn in a timely manner. In both events, user actions combined with the lack of an acknowledgment message describing the order cancellation from the external clinical system were the root causes. In 1 case, rapid, near-simultaneous order entry was the culprit; in the second, astute order management by a clinician, unaware of the lack of proper 2-way interface messaging from the external clinical system, led to the confusion. Although testing had shown that the laboratory system would cancel duplicate laboratory orders, it was thought that duplicate alerting in the new order entry system would prevent such events. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Author Correction: Segregation of mitochondrial DNA heteroplasmy through a developmental genetic bottleneck in human embryos.

    PubMed

    Floros, Vasileios I; Pyle, Angela; Dietmann, Sabine; Wei, Wei; Tang, Walfred W C; Irie, Naoko; Payne, Brendan; Capalbo, Antonio; Noli, Laila; Coxhead, Jonathan; Hudson, Gavin; Crosier, Moira; Strahl, Henrik; Khalaf, Yacoub; Saitou, Mitinori; Ilic, Dusko; Surani, M Azim; Chinnery, Patrick F

    2018-04-19

    In the version of this Letter originally published, an author error led to the affiliations for Brendan Payne, Jonathan Coxhead and Gavin Hudson being incorrect. The correct affiliations are: Brendan Payne: 3 Wellcome Trust Centre for Mitochondrial Research, Institute of Genetic Medicine, Newcastle University, Newcastle upon Tyne, UK. 6 Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; this is a new affiliation 6 and subsequent existing affiliations have been renumbered. Jonathan Coxhead: 11 Genomic Core Facility, Institute of Genetic Medicine, Newcastle University, Newcastle upon Tyne, UK; this is a new affiliation 11 and subsequent existing affiliations have been renumbered. Gavin Hudson: 3 Wellcome Trust Centre for Mitochondrial Research, Institute of Genetic Medicine, Newcastle University, Newcastle upon Tyne, UK. In addition, in Fig. 2d, the numbers on the x-axis of the left plot were incorrectly labelled as negative; they should have been positive. These errors have now been corrected in all online versions of the Letter.

  11. Lessons from aviation - the role of checklists in minimally invasive cardiac surgery.

    PubMed

    Hussain, S; Adams, C; Cleland, A; Jones, P M; Walsh, G; Kiaii, B

    2016-01-01

    We describe an adverse event during minimally invasive cardiac surgery that resulted in a multi-disciplinary review of intra-operative errors and the creation of a procedural checklist. This checklist aims to prevent errors of omission and communication failures that result in increased morbidity and mortality. We discuss the application of the aviation-led "threats and errors model" to medical practice and the role of checklists and other strategies aimed at reducing medical errors. © The Author(s) 2015.

  12. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
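
    To illustrate how error factors can be ranked with a TOPSIS-style closeness coefficient, the sketch below implements the crisp TOPSIS steps (normalize, weight, distance to the ideal and anti-ideal solutions, closeness). The paper itself uses fuzzy TOPSIS, which replaces crisp scores with fuzzy numbers; the decision matrix and weights here are invented placeholders.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with crisp TOPSIS.
    benefit[j] is True when larger values of criterion j are better."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)           # vector normalization
    v = norm * np.asarray(weights)                 # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                 # closeness coefficient

# Hypothetical scores of three latent error factors on four criteria.
scores = [[7, 6, 8, 5],
          [5, 8, 6, 7],
          [8, 5, 7, 6]]
cc = topsis(scores, weights=[0.3, 0.3, 0.2, 0.2], benefit=[True] * 4)
print(cc.argsort()[::-1])   # indices of factors ranked best to worst
```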

  13. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, however, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  14. VLC-based indoor location awareness using LED light and image sensors

    NASA Astrophysics Data System (ADS)

    Lee, Seok-Ju; Yoo, Jong-Ho; Jung, Sung-Yoon

    2012-11-01

    Recently, indoor LED lighting has been considered for constructing green infrastructure with energy savings while additionally providing LED-IT convergence services such as visible light communication (VLC) based location awareness and navigation. For example, in a large, complex shopping mall, location awareness for navigating to a destination is a very important issue. However, conventional navigation using GPS does not work indoors. Alternative location services based on WLAN have the problem of low positioning accuracy; for example, it is difficult to estimate height exactly, and if the height error is greater than the height between floors, it may cause serious problems. Therefore, conventional navigation is inappropriate for indoor use. A possible alternative solution for indoor navigation is a VLC-based location awareness scheme. Because indoor LED infrastructure will certainly be installed to provide lighting, indoor LED lighting combined with VLC technology has the potential to provide relatively high position-estimation accuracy. In this paper, we provide a new VLC-based positioning system using visible LED lights and image sensors. Our system uses the location of the image sensor lens and the location of the reception plane. By using more than two image sensors, we can determine the transmitter position with less than 1 m of position error. Through simulation, we verify the validity of the proposed VLC-based positioning system using visible LED light and image sensors.
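
    As a rough geometric illustration of image-sensor-based VLC positioning, each sensor observation defines a ray from its lens centre toward the imaged LED, and the LED position can be estimated as the point closest to the observed rays. The sketch below is a minimal two-ray stand-in, not the paper's exact geometry: it assumes the ray directions are already known (in practice they would be derived from the imaged spot position and the lens model), and all coordinates are invented.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest point to two non-parallel 3D rays (p + t*d); a simple stand-in
    for estimating the LED position from two image-sensor observations."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|^2.
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2

# Hypothetical lens centres and ray directions toward a ceiling LED at (1, 2, 3).
led = np.array([1.0, 2.0, 3.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
print(triangulate(c1, led - c1, c2, led - c2))   # ~[1, 2, 3]
```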

  15. Cost effectiveness of a pharmacist-led information technology intervention for reducing rates of clinically important errors in medicines management in general practices (PINCER).

    PubMed

    Elliott, Rachel A; Putman, Koen D; Franklin, Matthew; Annemans, Lieven; Verhaeghe, Nick; Eden, Martin; Hayre, Jasdeep; Rodgers, Sarah; Sheikh, Aziz; Avery, Anthony J

    2014-06-01

    We recently showed that a pharmacist-led information technology-based intervention (PINCER) was significantly more effective in reducing medication errors in general practices than providing simple feedback on errors, with cost per error avoided at £79 (US$131). We aimed to estimate cost effectiveness of the PINCER intervention by combining effectiveness in error reduction and intervention costs with the effect of the individual errors on patient outcomes and healthcare costs, to estimate the effect on costs and QALYs. We developed Markov models for each of six medication errors targeted by PINCER. Clinical event probability, treatment pathway, resource use and costs were extracted from literature and costing tariffs. A composite probabilistic model combined patient-level error models with practice-level error rates and intervention costs from the trial. Cost per extra QALY and cost-effectiveness acceptability curves were generated from the perspective of NHS England, with a 5-year time horizon. The PINCER intervention generated £2,679 less cost and 0.81 more QALYs per practice [incremental cost-effectiveness ratio (ICER): -£3,037 per QALY] in the deterministic analysis. In the probabilistic analysis, PINCER generated 0.001 extra QALYs per practice compared with simple feedback, at £4.20 less per practice. Despite this extremely small set of differences in costs and outcomes, PINCER dominated simple feedback with a mean ICER of -£3,936 (standard error £2,970). At a ceiling 'willingness-to-pay' of £20,000/QALY, PINCER reaches 59 % probability of being cost effective. PINCER produced marginal health gain at slightly reduced overall cost. Results are uncertain due to the poor quality of data to inform the effect of avoiding errors.
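
    The incremental cost-effectiveness ratio (ICER) quoted above is simply the difference in cost divided by the difference in QALYs between the intervention and the comparator; a negative ICER with lower cost and more QALYs indicates dominance. The sketch below uses round, made-up numbers rather than the trial data.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative (made-up) values: the intervention saves money and gains QALYs,
# so the ICER is negative and the intervention dominates the comparator.
print(icer(cost_new=97_300, qaly_new=10.81, cost_old=100_000, qaly_old=10.00))
```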

  16. Leveraging human oversight and intervention in large-scale parallel processing of open-source data

    NASA Astrophysics Data System (ADS)

    Casini, Enrico; Suri, Niranjan; Bradshaw, Jeffrey M.

    2015-05-01

    The popularity of cloud computing along with the increased availability of cheap storage have led to the necessity of elaborating and transforming large volumes of open-source data, all in parallel. One way to handle such extensive volumes of information properly is to take advantage of distributed computing frameworks like Map-Reduce. Unfortunately, an entirely automated approach that excludes human intervention is often unpredictable and error prone. Highly accurate data processing and decision-making can be achieved by supporting an automatic process through human collaboration, in a variety of environments such as warfare, cyber security and threat monitoring. Although this mutual participation seems easily exploitable, human-machine collaboration in the field of data analysis presents several challenges. First, due to the asynchronous nature of human intervention, it is necessary to verify that once a correction is made, all the necessary reprocessing is carried out along the processing chain. Second, it is often necessary to minimize the amount of reprocessing in order to optimize the usage of limited resources. In order to address these strict requirements, this paper introduces improvements to an innovative approach for human-machine collaboration in the processing of large amounts of open-source data in parallel.
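
    For readers unfamiliar with the map-reduce pattern mentioned above, the single-machine sketch below shows its shape: process each record independently in parallel (map), then fold the partial results into one aggregate (reduce). The record-processing function is a placeholder and the example says nothing about the paper's actual pipeline or its human-intervention mechanism.

```python
from multiprocessing import Pool
from functools import reduce
from collections import Counter

def map_record(record):
    """Placeholder 'elaboration' step: count tokens in one open-source record."""
    return Counter(record.lower().split())

def reduce_counts(a, b):
    """Fold partial results from the map phase into a single aggregate."""
    a.update(b)
    return a

if __name__ == "__main__":
    records = ["Human error led to rework", "Automation error detected",
               "Human oversight corrected the error"]
    with Pool(2) as pool:
        partials = pool.map(map_record, records)
    print(reduce(reduce_counts, partials, Counter()))
```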

  17. System calibration method for Fourier ptychographic microscopy.

    PubMed

    Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli

    2017-09-01

    Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high-resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it would be unlikely to distinguish the dominating error from these degraded reconstructions without any preknowledge. In addition, systematic error is generally a mixture of various error sources in the real situation, and it cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experiment conditions, and does not require any preknowledge, which makes the FPM more pragmatic. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  18. Improving Word Learning in Children Using an Errorless Technique

    ERIC Educational Resources Information Center

    Warmington, Meesha; Hitch, Graham J.; Gathercole, Susan E.

    2013-01-01

    The current experiment examined the relative advantage of an errorless learning technique over an errorful one in the acquisition of novel names for unfamiliar objects in typically developing children aged between 7 and 9 years. Errorless learning led to significantly better learning than did errorful learning. Processing speed and vocabulary…

  19. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is a key to analyzing contributing factors. The objective of this research effort is therefore to establish stochastic models, substantiated by sound theoretic foundations, to address the occurrence of human errors in the processing of the space shuttle.

  20. Effects of monetary reward and punishment on information checking behaviour: An eye-tracking study.

    PubMed

    Li, Simon Y W; Cox, Anna L; Or, Calvin; Blandford, Ann

    2018-07-01

    The aim of the present study was to investigate the effect of error consequence, as reward or punishment, on individuals' checking behaviour following data entry. This study comprised two eye-tracking experiments that replicate and extend the investigation of Li et al. (2016) into the effect of monetary reward and punishment on data-entry performance. The first experiment adopted the same experimental setup as Li et al. (2016) but additionally used an eye tracker. The experiment validated Li et al.'s (2016) finding that, when compared to no error consequence, both reward and punishment led to improved data-entry performance in terms of reducing errors, and that no performance difference was found between reward and punishment. The second experiment extended the earlier study by associating error consequence with each individual trial by providing immediate performance feedback to participants. It was found that gradual increment (i.e. reward feedback) also led to significantly more accurate performance than no error consequence. It is unclear whether gradual increment is more effective than gradual decrement because of the small sample size tested. However, this study reasserts the effectiveness of reward on data-entry performance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  2. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  3. Biomineralization of a Self-assembled, Soft-Matrix Precursor: Enamel

    NASA Astrophysics Data System (ADS)

    Snead, Malcolm L.

    2015-04-01

    Enamel is the bioceramic covering of teeth, a composite tissue composed of hierarchically organized hydroxyapatite crystallites fabricated by cells under physiologic pH and temperature. Enamel's material properties resist wear and fracture to serve a lifetime of chewing. Understanding the cellular and molecular mechanisms of enamel formation may allow a biology-inspired approach to material fabrication based on self-assembling proteins that control form and function. A genetic understanding of human diseases draws insight from nature's errors by exposing critical fabrication events, which can be validated experimentally and duplicated in mice using genetic engineering to phenocopy the human disease so that it can be explored in detail. This approach led to an assessment of amelogenin protein self-assembly which, when altered, disrupts fabrication of the soft enamel protein matrix. A misassembled protein matrix precursor results in loss of the cell-to-matrix contacts essential to fabrication and mineralization.

  4. A first step toward understanding patient safety

    PubMed Central

    2016-01-01

    Patient safety has become an important policy agenda in healthcare systems since publication of the 1999 report entitled "To Err Is Human." The paradigm has changed from blaming the individual for the error to identifying the weakness in the system that led to the adverse events. Anesthesia is one of the first healthcare specialties to adopt techniques and lessons from the aviation industry. The widespread use of simulation programs and the application of human factors engineering to clinical practice are among the influences of the aviation industry. Despite having relatively advanced medical technology and comparable safety records, the Korean health industry has little understanding of the systems approach to patient safety. Because implementation of the existing systems and programs requires time, dedication, and financial support, the Korean healthcare industry urgently needs to develop patient safety policies and put them into practice to improve patient safety before it is too late. PMID:27703622

  5. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define System; Identify human-machine; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.
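
    The likelihood, effects, and risk-evaluation steps can be illustrated with a minimal sketch that scores each candidate error as likelihood times severity; the rating scales, example errors, and threshold below are assumptions for illustration, not values from the presentation.

```python
# Minimal sketch of the risk-evaluation step of an HF PFMEA-style analysis:
# each potential human error gets an assumed 1-5 likelihood and 1-5 severity
# rating, and risk = likelihood x severity decides which errors to manage first.
# The ratings and threshold are illustrative, not from the presentation.
from dataclasses import dataclass

@dataclass
class PotentialError:
    action: str
    likelihood: int   # 1 (rare) .. 5 (frequent), assumed scale
    severity: int     # 1 (negligible) .. 5 (catastrophic), assumed scale

    @property
    def risk(self) -> int:
        return self.likelihood * self.severity

errors = [
    PotentialError("valve left in wrong position", likelihood=2, severity=5),
    PotentialError("step skipped in checklist", likelihood=3, severity=3),
    PotentialError("reading recorded incorrectly", likelihood=4, severity=2),
]

RISK_THRESHOLD = 9  # assumed cutoff for generating a mitigation
for e in sorted(errors, key=lambda e: e.risk, reverse=True):
    flag = "mitigate" if e.risk >= RISK_THRESHOLD else "accept/monitor"
    print(f"{e.action:35s} risk={e.risk:2d} -> {flag}")
```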

  6. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  7. Optical digital to analog conversion performance analysis for indoor set-up conditions

    NASA Astrophysics Data System (ADS)

    Dobesch, Aleš; Alves, Luis Nero; Wilfert, Otakar; Ribeiro, Carlos Gaspar

    2017-10-01

    In visible light communication (VLC), the optical digital to analog conversion (ODAC) approach was proposed as a suitable driving technique able to overcome the light-emitting diode's (LED) non-linear characteristic. The concept is analogous to an electrical digital-to-analog converter (EDAC): digital bits are binary weighted to represent an analog signal. The method relies on elementary on-off modulations that work with, rather than against, the LED's non-linear characteristic, allowing simultaneous lighting and communication. In the ODAC concept, the reconstruction error does not depend simply on the converter bit depth, as in the case of an EDAC; it also depends on the communication system set-up and the geometrical relation between emitter and receiver. The paper describes simulation results presenting the ODAC's error performance, taking into account the optical channel, the LED's half-power angle (HPA), and the receiver field of view (FOV). The set-up under consideration examines indoor conditions for a square room of 4 m side length and 3 m height, operating with one dominant wavelength (blue) and having walls with a reflection coefficient of 0.8. The achieved results reveal that the reconstruction error increases at higher data rates as a result of interference due to multipath propagation.
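
    A minimal sketch of the binary-weighting idea described above follows: each bit of a quantized sample drives one on/off LED whose optical weight is a power of two, and the receiver reconstructs the sample by summing the weighted contributions. The bit depth, per-LED gains, and noise level are illustrative assumptions; the per-LED gain mismatch stands in for the geometric effects the paper studies.

```python
import numpy as np

# Minimal sketch of the ODAC idea: an analog sample is quantized to N bits and
# each bit drives one on/off LED whose optical weight is 2^k; the receiver sums
# the weighted contributions to reconstruct the sample. Gains and noise are
# illustrative assumptions, not the paper's simulation setup.
N_BITS = 6
rng = np.random.default_rng(0)

def odac_transmit(sample_01):
    """Quantize a sample in [0, 1) to N_BITS on/off LED states (MSB first)."""
    level = int(sample_01 * (2 ** N_BITS))
    return [(level >> k) & 1 for k in range(N_BITS - 1, -1, -1)]

def odac_receive(led_states, gains, noise_std=0.0):
    """Sum binary-weighted LED contributions; per-LED gains model geometry."""
    weights = [2 ** k for k in range(N_BITS - 1, -1, -1)]
    signal = sum(b * w * g for b, w, g in zip(led_states, weights, gains))
    return (signal + rng.normal(0.0, noise_std)) / (2 ** N_BITS)

x = 0.63
bits = odac_transmit(x)
ideal = odac_receive(bits, gains=[1.0] * N_BITS)                    # matched geometry
skewed = odac_receive(bits, gains=[0.96, 1.0, 1.05, 1.0, 1.0, 1.0])  # mismatched geometry
print(f"input={x:.3f}  ideal={ideal:.3f}  mismatched gains={skewed:.3f}")
```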

  8. The contributions of human factors on human error in Malaysia aviation maintenance industries

    NASA Astrophysics Data System (ADS)

    Padil, H.; Said, M. N.; Azizan, A.

    2018-05-01

    Aviation maintenance is a multitasking activity in which individuals perform varied tasks under constant pressure to meet deadlines as well as challenging work conditions. These situational characteristics, combined with human factors, can lead to various types of human-related errors. The primary objective of this research is to develop a structural relationship model that incorporates human factors, organizational factors, and their impact on human errors in aviation maintenance. Towards that end, a questionnaire was developed and administered to Malaysian aviation maintenance professionals. A Structural Equation Modelling (SEM) approach was used in this study, utilizing AMOS software. Results showed a significant relationship between human factors and human errors in the tested model. Human factors had a partial effect on organizational factors, while organizational factors had a direct and positive impact on human errors. It was also revealed that organizational factors contributed to human errors when coupled with the human factors construct. This study has contributed to the advancement of knowledge on human factors affecting safety, has provided guidelines for improving human factors performance in aviation maintenance activities, and could be used as a reference for improving safety performance in Malaysian aviation maintenance companies.

  9. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  10. Auditing as Part of the Terminology Design Life Cycle

    PubMed Central

    Min, Hua; Perl, Yehoshua; Chen, Yan; Halper, Michael; Geller, James; Wang, Yue

    2006-01-01

    Objective To develop and test an auditing methodology for detecting errors in medical terminologies satisfying systematic inheritance. This methodology is based on various abstraction taxonomies that provide high-level views of a terminology and highlight potentially erroneous concepts. Design Our auditing methodology is based on dividing concepts of a terminology into smaller, more manageable units. First, we divide the terminology’s concepts into areas according to their relationships/roles. Then each multi-rooted area is further divided into partial-areas (p-areas) that are singly-rooted. Each p-area contains a set of structurally and semantically uniform concepts. Two kinds of abstraction networks, called the area taxonomy and p-area taxonomy, are derived. These taxonomies form the basis for the auditing approach. Taxonomies tend to highlight potentially erroneous concepts in areas and p-areas. Human reviewers can focus their auditing efforts on the limited number of problematic concepts following two hypotheses on the probable concentration of errors. Results A sample of the area taxonomy and p-area taxonomy for the Biological Process (BP) hierarchy of the National Cancer Institute Thesaurus (NCIT) was derived from the application of our methodology to its concepts. These views led to the detection of a number of different kinds of errors that are reported, and to confirmation of the hypotheses on error concentration in this hierarchy. Conclusion Our auditing methodology based on area and p-area taxonomies is an efficient tool for detecting errors in terminologies satisfying systematic inheritance of roles, and thus facilitates their maintenance. This methodology concentrates a domain expert’s manual review on portions of the concepts with a high likelihood of errors. PMID:16929044

  11. Medication errors: a prospective cohort study of hand-written and computerised physician order entry in the intensive care unit.

    PubMed

    Shulman, Rob; Singer, Mervyn; Goldstone, John; Bellingan, Geoff

    2005-10-05

    The study aimed to compare the impact of computerised physician order entry (CPOE) without decision support with hand-written prescribing (HWP) on the frequency, type and outcome of medication errors (MEs) in the intensive care unit. Details of MEs were collected before, and at several time points after, the change from HWP to CPOE. The study was conducted in a London teaching hospital's 22-bedded general ICU. The sampling periods were 28 weeks before and 2, 10, 25 and 37 weeks after introduction of CPOE. The unit pharmacist prospectively recorded details of MEs and the total number of drugs prescribed daily during the data collection periods, during the course of his normal chart review. The total proportion of MEs was significantly lower with CPOE (117 errors from 2429 prescriptions, 4.8%) than with HWP (69 errors from 1036 prescriptions, 6.7%) (p < 0.04). The proportion of errors reduced with time following the introduction of CPOE (p < 0.001). Two errors with CPOE led to patient harm requiring an increase in length of stay and, if administered, three prescriptions with CPOE could potentially have led to permanent harm or death. Differences in the types of error between systems were noted. There was a reduction in major/moderate patient outcomes with CPOE when non-intercepted and intercepted errors were combined (p = 0.01). The mean baseline APACHE II score did not differ significantly between the HWP and the CPOE periods (19.4 versus 20.0, respectively, p = 0.71). Introduction of CPOE was associated with a reduction in the proportion of MEs and an improvement in the overall patient outcome score (if intercepted errors were included). Moderate and major errors, however, remain a significant concern with CPOE.

  12. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but do they need to be? Companies with high risk, or major consequences, should consider the effect of human error. Across a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, medication administration errors, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas for concern to detailed assessments in which human error failure probabilities are calculated. Which methodology to use depends on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training, and procedures; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies, and routine tasks versus cost-based concerns). A Human Reliability Assessment should be the first step to reduce, mitigate, or eliminate costly mistakes or catastrophic failures. Using Human Reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  13. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2014-01-01

    Human error cannot be defined unambiguously in advance of it happening; an action often becomes an error only after the fact. The same action can result in a tragic accident in one situation or be judged heroic given a more favorable outcome. People often forget that we employ humans in business and industry for their flexibility and capability to change when needed. In complex systems, operations are driven by the specifications of the system and the system structure; people provide the flexibility to make it work. Human error has been reported as being responsible for 60%-80% of failures, accidents, and incidents in high-risk industries. We do not have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed, and there are actions that can be taken to reduce the risk of human error.

  14. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and the detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in the PTW system. Some suggestions are provided to reduce the likelihood of errors, especially by modifying the performance shaping factors and the dependencies among tasks. PMID:27014485
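
    A minimal sketch of SPAR-H style quantification, as the published method is generally described, is given below: the nominal human error probability is multiplied by the product of performance shaping factor (PSF) multipliers, with an adjustment that keeps the result bounded when the composite multiplier is large. The PSF values, task type, and adjustment trigger in the example are illustrative assumptions, not those assigned in this case study.

```python
# Minimal sketch of SPAR-H style quantification: HEP = nominal HEP x product of
# PSF multipliers, with the standard adjustment formula for large composites.
# The PSF values below are illustrative assumptions, and the trigger used here
# (composite > 1) is a simplification; SPAR-H applies the adjustment when
# several negative PSFs are assigned.
NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}

def spar_h_hep(task_type, psf_multipliers):
    nhep = NOMINAL_HEP[task_type]
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    hep = nhep * composite
    if composite > 1.0:
        # Adjustment keeps the probability below 1 for large composites.
        hep = (nhep * composite) / (nhep * (composite - 1.0) + 1.0)
    return min(hep, 1.0)

# Example: an action step under assumed high stress (x2) and poor procedures (x5).
print(f"HEP = {spar_h_hep('action', [2.0, 5.0]):.4f}")
```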

  15. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry, with over 50% of errors attributed to human error. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high-value, high-profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably brings schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human fallibility errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach, incorporating the steps needed to manage and minimize error.

  16. Prediction of human errors by maladaptive changes in event-related brain networks.

    PubMed

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus

    2008-04-22

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.

  17. Prediction of human errors by maladaptive changes in event-related brain networks

    PubMed Central

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D.; Specht, Karsten; Engel, Andreas K.; Hugdahl, Kenneth; von Cramon, D. Yves; Ullsperger, Markus

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve ≈30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123

  18. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  19. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1999-01-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.

  20. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  1. Sources of Error in Substance Use Prevalence Surveys

    PubMed Central

    Johnson, Timothy P.

    2014-01-01

    Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511

  2. Structured methods for identifying and correcting potential human errors in aviation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  3. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    PubMed

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audiotaped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while viewing errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  4. Intervention strategies for the management of human error

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1993-01-01

    This report examines the management of human error in the cockpit. The principles probably apply to other applications in the aviation realm (e.g. air traffic control, dispatch, weather, etc.) as well as to other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also means of preventing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.
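
    The "trapping of errors within a system" idea can be made concrete with a small, hypothetical sketch that checks a crew input for internal consistency before it is allowed to affect system output; the field names and limits are invented for illustration and are not drawn from any particular aircraft system.

```python
# Hypothetical sketch of "error trapping": validate a crew input for internal
# consistency before it is allowed to affect system output. The field names
# and limits are invented for illustration only.
def trap_altitude_entry(current_alt_ft, target_alt_ft, climb_rate_fpm):
    problems = []
    if not (0 <= target_alt_ft <= 45_000):
        problems.append("target altitude outside assumed aircraft envelope")
    if climb_rate_fpm == 0 and target_alt_ft != current_alt_ft:
        problems.append("altitude change commanded with zero vertical rate")
    if climb_rate_fpm > 0 and target_alt_ft < current_alt_ft:
        problems.append("climb rate positive but target is below current altitude")
    return problems  # an empty list means the entry passes the consistency checks

print(trap_altitude_entry(current_alt_ft=31_000, target_alt_ft=28_000, climb_rate_fpm=1500))
```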

  5. Corrections of clinical chemistry test results in a laboratory information system.

    PubMed

    Wang, Sihe; Ho, Virginia

    2004-08-01

    The recently released Institute of Medicine reports, To Err Is Human and Patient Safety, have received national attention because of their focus on the problem of medical errors. Although a small number of studies have reported on errors in general clinical laboratories, there are, to our knowledge, no reported studies that focus on errors in pediatric clinical laboratory testing. The objectives were to characterize the errors that led to corrections of pediatric clinical chemistry results in the laboratory information system, Misys, and to provide initial data on the errors detected in pediatric clinical chemistry laboratories in order to improve patient safety in pediatric health care. All clinical chemistry staff members were informed of the study and were requested to report in writing when a correction was made in the laboratory information system, Misys. Errors were detected either by clinicians (the results did not fit the patients' clinical conditions) or by laboratory technologists (the results were double-checked, and the worksheets were carefully examined twice a day). No incident discovered before or during the final validation was included. On each Monday of the study, we generated a report from Misys that listed all of the corrections made during the previous week. We then categorized the corrections according to the types and stages of the incidents that led to them. A total of 187 incidents were detected during the 10-month study, representing a 0.26% error detection rate per requisition. The distribution of the detected incidents included 31 (17%) preanalytic incidents, 46 (25%) analytic incidents, and 110 (59%) postanalytic incidents. Errors related to non-interfaced tests accounted for 50% of the total incidents and for 37% of the affected tests and orderable panels, while the non-interfaced tests and panels accounted for 17% of the total test volume in our laboratory. This pilot study provided the rate and categories of errors detected in a pediatric clinical chemistry laboratory based on corrections of results in the laboratory information system. Directly interfacing the instruments to the laboratory information system had a favorable effect on reducing laboratory errors.
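
    The stage breakdown reported above is easy to reproduce; the sketch below tallies the published incident counts by stage and shows the detection-rate formula, with the requisition total entered as an assumed placeholder since the abstract reports only the resulting 0.26% rate.

```python
# Tiny sketch reproducing the stage breakdown reported above. The incident
# counts come from the abstract; the requisition total is an assumed
# placeholder (only the resulting 0.26% detection rate is reported).
incidents = {"preanalytic": 31, "analytic": 46, "postanalytic": 110}
total = sum(incidents.values())

for stage, n in incidents.items():
    print(f"{stage:12s} {n:3d} ({n / total:4.0%})")

ASSUMED_REQUISITIONS = 72_000  # placeholder so the rate formula can be shown
print(f"detection rate per requisition = {total / ASSUMED_REQUISITIONS:.2%}")
```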

  6. Exploring Reactions to Pilot Reliability Certification and Changing Attitudes on the Reduction of Errors

    ERIC Educational Resources Information Center

    Boedigheimer, Dan

    2010-01-01

    Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…

  7. Evaluating a medical error taxonomy.

    PubMed

    Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MedWatch because of the focus on the medical device and the format of reporting.

  8. An Evaluation of Departmental Radiation Oncology Incident Reports: Anticipating a National Reporting System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric

    Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface, (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) of these events would be submittable to a NRS of which the majority was related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.

  9. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    PubMed

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error that is attributable to humans rather than machines, and that is important to avoid to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers in a cross-sectional, population-based study. A sample of 5,480 subjects who were actively working as train drivers was recruited in South Korea; 4,634 drivers completed all questionnaires (response rate 84.6%). No intervention was administered. Measures included the Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Of the 4,634 train drivers, 349 (7.5%) showed more than one human error per 5 y. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who experienced severe posttraumatic stress showed a significantly higher number of human errors than those who did not. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, or work stress after adjusting for age, sex, years of education, marital status, and career duration. Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.

  10. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce the cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISAs) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISAs) is presented here. The resulting clusters described the underlying relationships among the ISAs. Initial models of human error in flight mission operations are presented. Next, the Voyager ISAs will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.

  11. Dissociable contribution of the parietal and frontal cortex to coding movement direction and amplitude

    PubMed Central

    Davare, Marco; Zénon, Alexandre; Desmurget, Michel; Olivier, Etienne

    2015-01-01

    To reach for an object, we must convert its spatial location into an appropriate motor command, merging movement direction and amplitude. In humans, it has been suggested that this visuo-motor transformation occurs in a dorsomedial parieto-frontal pathway, although the causal contribution of the areas constituting the “reaching circuit” remains unknown. Here we used transcranial magnetic stimulation (TMS) in healthy volunteers to disrupt the function of either the medial intraparietal area (mIPS) or dorsal premotor cortex (PMd), in each hemisphere. The task consisted in performing step-tracking movements with the right wrist towards targets located in different directions and eccentricities; targets were either visible for the whole trial (Target-ON) or flashed for 200 ms (Target-OFF). Left and right mIPS disruption led to errors in the initial direction of movements performed towards contralateral targets. These errors were corrected online in the Target-ON condition but when the target was flashed for 200 ms, mIPS TMS manifested as a larger endpoint spreading. In contrast, left PMd virtual lesions led to higher acceleration and velocity peaks—two parameters typically used to probe the planned movement amplitude—irrespective of the target position, hemifield and presentation condition; in the Target-OFF condition, left PMd TMS induced overshooting and increased the endpoint dispersion along the axis of the target direction. These results indicate that left PMd intervenes in coding amplitude during movement preparation. The critical TMS timings leading to errors in direction and amplitude were different, namely 160–100 ms before movement onset for mIPS and 100–40 ms for left PMd. TMS applied over right PMd had no significant effect. These results demonstrate that, during motor preparation, direction and amplitude of goal-directed movements are processed by different cortical areas, at distinct timings, and according to a specific hemispheric organization. PMID:25999837

  12. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    NASA Technical Reports Server (NTRS)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  13. Retraction notice to "The Palaeocene Cerro Munro tonalite intrusion (Chubut Province, Argentina): A plutonic remnant of explosive volcanism?"[J. S. Am. Earth Sci. 78C 38-60

    NASA Astrophysics Data System (ADS)

    Rodríguez, C.; Aragón, E.; Castro, A.; Pedreira, R.; Sánchez-Navas, A.; Díaz-Alvarado, J.; D´Eramo, F.; Pinotti, L.; Aguilera, Y.; Cavarozzi, C.; Demartis, M.; Hernando, I. R.; Fuentes, T.

    2017-10-01

    The publisher regrets that an error occurred which led to the premature publication of this paper. This error bears no reflection on the article or its authors. The publisher apologizes to the authors and the readers for this unfortunate error in Journal of South American Earth Sciences, 78C (2017) 38-60, http://dx.doi.org/10.1016/j.jsames.2017.06.002.

  14. Evaluation of Argos Telemetry Accuracy in the High-Arctic and Implications for the Estimation of Home-Range Size

    PubMed Central

    Christin, Sylvain; St-Laurent, Martin-Hugues; Berteaux, Dominique

    2015-01-01

    Animal tracking through Argos satellite telemetry has enormous potential to test hypotheses in animal behavior, evolutionary ecology, or conservation biology. Yet the applicability of this technique cannot be fully assessed because no clear picture exists as to the conditions influencing the accuracy of Argos locations. Latitude, type of environment, and transmitter movement are among the main candidate factors affecting accuracy. A posteriori data filtering can remove “bad” locations, but again testing is still needed to refine filters. First, we evaluate experimentally the accuracy of Argos locations in a polar terrestrial environment (Nunavut, Canada), with both static and mobile transmitters transported by humans and coupled to GPS transmitters. We report static errors among the lowest published. However, the 68th error percentiles of mobile transmitters were 1.7 to 3.8 times greater than those of static transmitters. Second, we test how different filtering methods influence the quality of Argos location datasets. Accuracy of location datasets was best improved by keeping only locations of the best classes (LC3 and LC2), while the Douglas Argos filter and a homemade speed filter yielded similar performance while retaining more locations. All filters effectively reduced the 68th error percentiles. Finally, we assess how location error impacted, at six spatial scales, two common estimators of home-range size (a proxy of animal space use behavior synthesizing movements), the minimum convex polygon and the fixed kernel estimator. Location error led to a sometimes dramatic overestimation of home-range size, especially at very local scales. We conclude that Argos telemetry is appropriate to study medium-size terrestrial animals in polar environments, but recommend that location errors are always measured and evaluated against research hypotheses, and that data are always filtered before analysis. How movement speed of transmitters affects location error needs additional research. PMID:26545245
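
    Two of the post-processing steps described above, keeping only the best location classes and summarizing accuracy with a 68th error percentile, are sketched below; the example records and error values are invented for illustration.

```python
import numpy as np

# Minimal sketch of two steps described above: keep only the best Argos
# location classes (LC3 and LC2) and summarize accuracy with the 68th error
# percentile. The records and error values are invented for illustration.
records = [
    {"lc": "3", "error_m": 180}, {"lc": "2", "error_m": 420},
    {"lc": "1", "error_m": 950}, {"lc": "0", "error_m": 2600},
    {"lc": "A", "error_m": 1800}, {"lc": "3", "error_m": 220},
]

best = [r for r in records if r["lc"] in {"3", "2"}]     # class-based filter
p68_all = np.percentile([r["error_m"] for r in records], 68)
p68_best = np.percentile([r["error_m"] for r in best], 68)
print(f"68th error percentile: all={p68_all:.0f} m, LC3+LC2 only={p68_best:.0f} m")
```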

  15. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control-theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight handling qualities simulation.

  16. Explanation Capabilities for Behavior-Based Robot Control

    NASA Technical Reports Server (NTRS)

    Huntsberger, Terrance L.

    2012-01-01

    A recent study that evaluated issues associated with remote interaction with an autonomous vehicle within the framework of grounding found that missing contextual information led to uncertainty in the interpretation of collected data, and so introduced errors into the command logic of the vehicle. As the vehicles became more autonomous through the activation of additional capabilities, more errors were made. This is an inefficient use of the platform, since the behavior of the remotely located autonomous vehicles did not coincide with the "mental models" of human operators. One of the conclusions of the study was that there should be a way for the autonomous vehicles to describe what action they chose and why. Robotic agents with enough self-awareness to dynamically adjust the information conveyed back to the Operations Center, based on a detail-level component analysis of requests, could provide this description capability. One way to accomplish this is to map the behavior base of the robot into a formal mathematical framework called a cost-calculus. A cost-calculus uses composition operators to build up sequences of behaviors that can then be compared to what is observed using well-known inference mechanisms.

  17. PrimPol prevents APOBEC/AID family mediated DNA mutagenesis

    PubMed Central

    Pilzecker, Bas; Buoninfante, Olimpia Alessandra; Pritchard, Colin; Blomberg, Olga S.; Huijbers, Ivo J.; van den Berk, Paul C.M.; Jacobs, Heinz

    2016-01-01

    Abstract PrimPol is a DNA damage tolerant polymerase displaying both translesion synthesis (TLS) and (re)-priming properties. This led us to study the consequences of a PrimPol deficiency in tolerating mutagenic lesions induced by members of the APOBEC/AID family of cytosine deaminases. Interestingly, during somatic hypermutation, PrimPol counteracts the generation of C>G transversions on the leading strand. Independently, mutation analyses in human invasive breast cancer confirmed a pro-mutagenic activity of APOBEC3B and revealed a genome-wide anti-mutagenic activity of PRIMPOL as well as most Y-family TLS polymerases. PRIMPOL especially prevents APOBEC3B targeted cytosine mutations within TpC dinucleotides. As C transversions induced by APOBEC/AID family members depend on the formation of AP-sites, we propose that PrimPol reprimes preferentially downstream of AP-sites on the leading strand, to prohibit error-prone TLS and simultaneously stimulate error-free homology directed repair. These in vivo studies are the first demonstrating a critical anti-mutagenic activity of PrimPol in genome maintenance. PMID:26926109

  18. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine repositories of human factors data to identify the possible relationships between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.
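
    A sketch of the kind of prototype data structure the project describes, a matrix mapping error classes onto candidate technology fixes, is shown below; the classes and fixes listed are illustrative placeholders, not the actual AvSP matrix.

```python
# Illustrative sketch of an error-class x intervention matrix of the kind the
# abstract describes; the classes and fixes below are invented placeholders,
# not the actual AvSP matrix.
matrix = {
    "skill-based slip":  ["alerting system", "error-tolerant interface"],
    "decision error":    ["decision aid", "improved training"],
    "perceptual error":  ["enhanced display", "synthetic vision"],
    "routine violation": ["procedure redesign", "policy/oversight change"],
}

def fixes_for(error_class):
    """Look up candidate interventions for a given error class."""
    return matrix.get(error_class, [])

def classes_addressed_by(fix):
    """Invert the matrix: which error classes does a given fix address?"""
    return [cls for cls, fixes in matrix.items() if fix in fixes]

print(fixes_for("decision error"))
print(classes_addressed_by("procedure redesign"))
```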

  19. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent, quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluations. The average daily error rate for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
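
    The error-rate definition given above (errors divided by opportunities for error) is simple to compute; the tray and item counts in the sketch below are invented for illustration.

```python
# Human error rate as defined above: errors / opportunities for error.
# The tray and item counts below are invented for illustration.
def human_error_rate(errors, opportunities):
    return errors / opportunities if opportunities else 0.0

trays_served = 640
items_per_tray = 9              # each item is treated as one opportunity for error
errors_found = 168

rate = human_error_rate(errors_found, trays_served * items_per_tray)
print(f"daily error rate = {rate:.1%}")   # ~2.9%, under the 3% figure cited above
```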

  20. Reflections on human error - Matters of life and death

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1989-01-01

    The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.

  1. A stochastic dynamic model for human error analysis in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Delgado-Loperena, Dharma

    Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it incorporates concepts derived from fractal and chaos theory and suggests a re-evaluation of base theory regarding human error. The results of this research were based on a comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve as a basis for other formulas used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from a steam generator tube rupture (SGTR) event simulation provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on understanding the patterns of human error analysis can be gleaned, helping to reduce and prevent undesirable events.

  2. 4.5-Gb/s RGB-LED based WDM visible light communication system employing CAP modulation and RLS based adaptive equalization.

    PubMed

    Wang, Yiguang; Huang, Xingxing; Tao, Li; Shi, Jianyang; Chi, Nan

    2015-05-18

    Inter-symbol interference (ISI) is one of the key problems that seriously limit the transmission data rate in high-speed VLC systems. To eliminate ISI and further improve system performance, a series of equalization schemes have been widely investigated. As an adaptive algorithm commonly used in wireless communication, RLS is also suitable for visible light communication due to its quick convergence and good performance. In this paper, for the first time we experimentally demonstrate a high-speed RGB-LED based WDM VLC system employing carrier-less amplitude and phase (CAP) modulation and recursive least squares (RLS) based adaptive equalization. An aggregate data rate of 4.5 Gb/s is successfully achieved over 1.5-m indoor free-space transmission with the bit error rate (BER) below the 7% forward error correction (FEC) limit of 3.8×10^-3. To the best of our knowledge, this is the highest data rate ever achieved in RGB-LED based VLC systems.
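
    The recursive least squares (RLS) equalizer mentioned above follows a standard update rule. Below is a minimal real-valued sketch of a generic RLS adaptive equalizer, not the authors' implementation; the toy channel, tap count, and forgetting factor are illustrative assumptions.

    ```python
    import numpy as np

    def rls_equalizer(received, desired, num_taps=7, lam=0.99, delta=0.01):
        """Generic recursive least squares (RLS) adaptive equalizer (real-valued sketch)."""
        w = np.zeros(num_taps)            # equalizer tap weights
        P = np.eye(num_taps) / delta      # inverse correlation matrix estimate
        out = np.zeros(len(received))
        for n in range(num_taps - 1, len(received)):
            x = received[n - num_taps + 1:n + 1][::-1]   # current and past received samples
            k = P @ x / (lam + x @ P @ x)                # gain vector
            e = desired[n] - w @ x                       # a priori error
            w = w + k * e                                # weight update
            P = (P - np.outer(k, x @ P)) / lam           # inverse correlation update
            out[n] = w @ x
        return w, out

    # Illustrative usage with a toy 3-tap ISI channel and known training symbols
    rng = np.random.default_rng(0)
    symbols = rng.choice([-1.0, 1.0], size=5000)
    rx = np.convolve(symbols, [0.9, 0.4, 0.2], mode="full")[:len(symbols)]
    rx += 0.05 * rng.standard_normal(len(rx))
    w, eq = rls_equalizer(rx, symbols)
    print("BER after convergence:", np.mean(np.sign(eq[1000:]) != symbols[1000:]))
    ```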

  3. Piezo-Phototronic Effect Controlled Dual-Channel Visible light Communication (PVLC) Using InGaN/GaN Multiquantum Well Nanopillars.

    PubMed

    Du, Chunhua; Jiang, Chunyan; Zuo, Peng; Huang, Xin; Pu, Xiong; Zhao, Zhenfu; Zhou, Yongli; Li, Linxuan; Chen, Hong; Hu, Weiguo; Wang, Zhong Lin

    2015-12-02

    Visible light communication (VLC) simultaneously provides illumination and communication via light emitting diodes (LEDs). Keeping a low bit error rate is essential to communication quality, and holding a stable brightness level is pivotal for illumination function. For the first time, a piezo-phototronic effect controlled visible light communication (PVLC) system based on InGaN/GaN multiquantum wells nanopillars is demonstrated, in which the information is coded by mechanical straining. This approach of force coding is also instrumental to avoid LED blinks, which has less impact on illumination and is much safer to eyes than electrical on/off VLC. The two-channel transmission mode of the system here shows great superiority in error self-validation and error self-elimination in comparison to VLC. This two-channel PVLC system provides a suitable way to carry out noncontact, reliable communication under complex circumstances. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Investigating the influence of LiDAR ground surface errors on the utility of derived forest inventories

    Treesearch

    Wade T. Tinkham; Alistair M. S. Smith; Chad Hoffman; Andrew T. Hudak; Michael J. Falkowski; Mark E. Swanson; Paul E. Gessler

    2012-01-01

    Light detection and ranging, or LiDAR, effectively produces products spatially characterizing both terrain and vegetation structure; however, development and use of those products has outpaced our understanding of the errors within them. LiDAR's ability to capture three-dimensional structure has led to interest in conducting or augmenting forest inventories with...

  5. Design of a detection system of highlight LED arrays' effect on the human organization

    NASA Astrophysics Data System (ADS)

    Chen, Shuwang; Shi, Guiju; Xue, Tongze; Liu, Yanming

    2009-05-01

    LEDs (Light Emitting Diodes) have many advantages in intensity, wavelength, practicality and price, so they are feasible for application in biomedical engineering. A system for studying the effect of high-brightness LED arrays on human tissue is designed. The temperature of the skin surface rises when the skin and underlying tissue are irradiated by high-brightness LED arrays; the metabolism and blood circulation of the irradiated position become faster than those of non-irradiated areas, so the surface temperature varies across different positions of the skin. The structure of the LED source array system is presented and a measurement system for studying the LEDs' influence on human tissue is designed. The temperature values of the irradiated points are detected by an infrared temperature detector. The temperature change differs according to LED parameters, such as the number of LEDs, the irradiation time and the luminous intensity. The experimental device is designed as an LED array pen. The LED array device is used to irradiate points on the human body, where it may act on the tissue in a manner similar to acupuncture. The system is applied in the treatment of certain skin diseases, such as age pigmentation, skin cancer and freckles.

  6. Human factors process failure modes and effects analysis (HF PFMEA) software tool

    NASA Technical Reports Server (NTRS)

    Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)

    2011-01-01

    Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified, where the human error is related to the verb used in describing the task. The likelihood of occurrence, detection, and correction of the human error is identified, as is the severity of its effect. From the likelihood of occurrence and the severity, the risk of potential harm is identified and compared with a risk threshold to determine the appropriateness of corrective measures.
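
    The patent abstract describes scoring likelihood of occurrence, detection/correction, and severity, then comparing the resulting risk with a threshold. The sketch below illustrates that style of scoring in general terms; the 1-10 scales, the multiplicative risk number, and the threshold are assumptions for illustration, not the tool's actual values.

    ```python
    from dataclasses import dataclass

    @dataclass
    class HumanErrorMode:
        verb: str            # action verb describing the human activity
        occurrence: int      # 1 (rare) .. 10 (frequent) -- assumed scale
        detection: int       # 1 (almost always caught/corrected) .. 10 (rarely caught) -- assumed scale
        severity: int        # 1 (negligible) .. 10 (catastrophic) -- assumed scale

        @property
        def risk_priority(self) -> int:
            # Combined risk score (an FMEA-style risk priority number, assumed here)
            return self.occurrence * self.detection * self.severity

    def needs_corrective_measure(mode: HumanErrorMode, threshold: int = 100) -> bool:
        # Compare the combined risk with an assumed threshold
        return mode.risk_priority >= threshold

    mode = HumanErrorMode(verb="connect", occurrence=4, detection=6, severity=7)
    print(mode.risk_priority, needs_corrective_measure(mode))  # 168 True
    ```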

  7. Air Force Academy Homepage

    Science.gov Websites

  8. Development of an FAA-EUROCONTROL technique for the analysis of human error in ATM : final report.

    DOT National Transportation Integrated Search

    2002-07-01

    Human error has been identified as a dominant risk factor in safety-oriented industries such as air traffic control (ATC). However, little is known about the factors leading to human errors in current air traffic management (ATM) systems. The first s...

  9. Human Error: The Stakes Are Raised.

    ERIC Educational Resources Information Center

    Greenberg, Joel

    1980-01-01

    Mistakes related to the operation of nuclear power plants and other technologically complex systems are discussed. Recommendations are given for decreasing the chance of human error in the operation of nuclear plants. The causes of the Three Mile Island incident are presented in terms of the human error element. (SA)

  10. Avoiding Human Error in Mission Operations: Cassini Flight Experience

    NASA Technical Reports Server (NTRS)

    Burk, Thomas A.

    2012-01-01

    Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.

  11. Good people who try their best can have problems: recognition of human factors and how to minimise error.

    PubMed

    Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David

    2016-01-01

    Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation, where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes that arise from individual human factors and within surgical teams should be better recognised and emphasised. Attitudes towards, and acceptance of, preoperative briefing have improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, highlight how they can lead to error, and show how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    PubMed

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study used feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire, followed by summarizing the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as the most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top 10 errors list based on means, with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.

  13. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  14. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed using error frequency and the analysis of variance method from mathematical statistics. Determination of the measured data accuracy and the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors are also covered in the paper. This paper analyses the measured data based on error frequency and, in a way, provides reference elements to promote the development of the garment industry.
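
    The analysis-of-variance step mentioned above can be illustrated with a one-way ANOVA on measurement errors grouped by body part. The sketch below uses simulated error distributions, not the paper's measured data; the body parts and error magnitudes are assumptions.

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    # Illustrative sketch: compare measurement errors (measured minus reference, in cm)
    # across three assumed body parts with a one-way ANOVA.
    rng = np.random.default_rng(3)
    waist  = rng.normal(0.4, 0.6, 30)      # assumed error distributions
    chest  = rng.normal(0.1, 0.4, 30)
    inseam = rng.normal(0.8, 0.7, 30)

    stat, p = f_oneway(waist, chest, inseam)
    print(f"F = {stat:.2f}, p = {p:.4f}")  # a small p suggests the parts differ in error
    ```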

  15. Accuracy and consistency of grass pollen identification by human analysts using electron micrographs of surface ornamentation

    PubMed Central

    Mander, Luke; Baker, Sarah J.; Belcher, Claire M.; Haselhorst, Derek S.; Rodriguez, Jacklyn; Thorn, Jessica L.; Tiwari, Shivangi; Urrego, Dunia H.; Wesseln, Cassandra J.; Punyasena, Surangi W.

    2014-01-01

    • Premise of the study: Humans frequently identify pollen grains at a taxonomic rank above species. Grass pollen is a classic case of this situation, which has led to the development of computational methods for identifying grass pollen species. This paper aims to provide context for these computational methods by quantifying the accuracy and consistency of human identification. • Methods: We measured the ability of nine human analysts to identify 12 species of grass pollen using scanning electron microscopy images. These are the same images that were used in computational identifications. We have measured the coverage, accuracy, and consistency of each analyst, and investigated their ability to recognize duplicate images. • Results: Coverage ranged from 87.5% to 100%. Mean identification accuracy ranged from 46.67% to 87.5%. The identification consistency of each analyst ranged from 32.5% to 87.5%, and each of the nine analysts produced considerably different identification schemes. The proportion of duplicate image pairs that were missed ranged from 6.25% to 58.33%. • Discussion: The identification errors made by each analyst, which result in a decline in accuracy and consistency, are likely related to psychological factors such as the limited capacity of human memory, fatigue and boredom, recency effects, and positivity bias. PMID:25202649
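
    The coverage, accuracy, and consistency figures reported above are simple proportions. The sketch below shows one plausible way to compute them from an analyst's answers; the data layout, species labels, and duplicate-pair bookkeeping are assumptions for illustration, not the authors' scripts.

    ```python
    # Coverage: fraction of images attempted; accuracy: fraction of attempted identifications
    # matching the true species; consistency: fraction of duplicate image pairs given the same label.

    def coverage(labels):
        return sum(l is not None for l in labels) / len(labels)

    def accuracy(labels, truth):
        scored = [(l, t) for l, t in zip(labels, truth) if l is not None]  # skip unanswered images
        return sum(l == t for l, t in scored) / len(scored)

    def consistency(labels, duplicate_pairs):
        return sum(labels[i] == labels[j] for i, j in duplicate_pairs) / len(duplicate_pairs)

    labels = ["poa", "poa", None, "zea", "zea", "poa"]   # one analyst's answers (hypothetical)
    truth  = ["poa", "zea", "zea", "zea", "zea", "poa"]
    print(coverage(labels), accuracy(labels, truth), consistency(labels, [(0, 5), (3, 4)]))
    ```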

  16. An assessment of the realism of digital human manikins used for simulation in ergonomics.

    PubMed

    Nérot, Agathe; Skalli, Wafa; Wang, Xuguang

    2015-01-01

    In this study, the accuracy of the joint centres of the manikins generated by RAMSIS and Human Builder (HB), two digital human modelling (DHM) systems widely used in industry for virtual ergonomics simulation, was investigated. Eighteen variously sized females and males were generated from external anthropometric dimensions, and six joint centres (knee, hip and four spine joints) were compared with their anatomic locations obtained from the three-dimensional reconstructed bones from a low-dose X-ray system. Both RAMSIS and HB could correctly reproduce external anthropometric dimensions, while the estimation of internal joint centre locations presented an average error of 27.6 mm for HB and 38.3 mm for RAMSIS. Differences between the manikins showed that a more realistic kinematic linkage led to better accuracy in joint location. This study opens the way to further research on the relationship between external body geometry and the internal skeleton in order to improve the realism of the internal skeleton of DHMs, especially for biomechanical analyses requiring joint load and muscle force estimation. This study assessed two digital human modelling (DHM) systems widely used in industry for virtual ergonomics. The results support the need for more realistic human modelling, especially for biomechanical analysis, and for the standardisation of DHMs.

  17. WITHDRAWN: The Palaeocene Cerro Munro tonalite intrusion (Chubut Province, Argentina): A plutonic remnant of explosive volcanism?

    NASA Astrophysics Data System (ADS)

    Rodríguez, C.; Aragón, E.; Castro, A.; Pedreira, R.; Sánchez-Navas, A.; Díaz-Alvarado, J.; D´Eramo, F.; Pinotti, L.; Aguilera, Y.; Cavarozzi, C.; Demartis, M.; Hernando, I. R.; Fuentes, T.

    2017-10-01

    The publisher regrets that an error occurred which led to the premature publication of this paper. This error bears no reflection on the article or its authors. The publisher apologizes to the authors and the readers for this unfortunate error in Journal of South American Earth Sciences, 78C (2017) 30 - 60, http://dx.doi.org/10.1016/j.jsames.2017.06.002. The full Elsevier Policy on Article Withdrawal can be found at https://www.elsevier.com/about/our-business/policies/article-withdrawal

  18. Tailoring a Human Reliability Analysis to Your Industry Needs

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2016-01-01

    Accidents caused by human error with catastrophic consequences occur across many industries: airline mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies is used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element in developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry-specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training and procedures, 4) the type and availability of data, 5) how the industry views risk and reliability, and 6) the types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determining the primary risk factors contributing to a potential human error, a more detailed analysis method may be employed, versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries in which humans operate large equipment or transport systems (e.g., railroads or airlines) have more need to address the man-machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of human reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.

  19. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    NASA Technical Reports Server (NTRS)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

    The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

  20. Protocol for the PINCER trial: a cluster randomised trial comparing the effectiveness of a pharmacist-led IT-based intervention with simple feedback in reducing rates of clinically important errors in medicines management in general practices

    PubMed Central

    Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Elliott, Rachel; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Murray, Scott A; Prescott, Robin J; Cresswell, Kathrin; Sheikh, Aziz

    2009-01-01

    Background Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led information-technology-based complex intervention compared with simple feedback in reducing proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods Research subject group: "At-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either: (i) Computer-generated feedback; or (ii) Pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post intervention: - with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs - with a computer-recorded diagnosis of asthma being prescribed beta-blockers - aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and investigate possible reasons why the interventions prove effective, or conversely prove ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. Discussion At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken. Trial registration Current controlled trials ISRCTN21785299 PMID:19409095

  1. Protocol for the PINCER trial: a cluster randomised trial comparing the effectiveness of a pharmacist-led IT-based intervention with simple feedback in reducing rates of clinically important errors in medicines management in general practices.

    PubMed

    Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Elliott, Rachel; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Murray, Scott A; Prescott, Robin J; Cresswell, Kathrin; Sheikh, Aziz

    2009-05-01

    Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost effectiveness and acceptability of a pharmacist-led information-technology-based complex intervention compared with simple feedback in reducing proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. RESEARCH SUBJECT GROUP: "At-risk" patients registered with computerised general practices in two geographical regions in England. Parallel group pragmatic cluster randomised trial. Practices will be randomised to either: (i) Computer-generated feedback; or (ii) Pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. The proportion of patients in each practice at six and 12 months post intervention: - with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; - with a computer-recorded diagnosis of asthma being prescribed beta-blockers; - aged 75 years and older receiving long-term prescriptions for angiotensin converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. SECONDARY OUTCOME MEASURES: These relate to a number of other examples of potentially hazardous prescribing and medicines management. An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. QUALITATIVE ANALYSIS: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and investigate possible reasons why the interventions prove effective, or conversely prove ineffective. 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm compared with an 11% reduction in the simple feedback arm. At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.

  2. Position sense at the human forearm in the horizontal plane during loading and vibration of elbow muscles

    PubMed Central

    Ansems, G E; Allen, T J; Proske, U

    2006-01-01

    When blindfolded subjects match the position of their forearms in the vertical plane they rely on signals coming from the periphery as well as from the central motor command. The command signal provides a positional cue from the accompanying effort sensation required to hold the arm against gravity. Here we have asked: does a centrally generated effort signal contribute to position sense in the horizontal plane, where gravity cannot play a role? Blindfolded subjects were required to match forearm position for the unloaded arm and when flexors or extensors were bearing 10%, 25% or 40% of maximum loads. Before each match the reference arm was conditioned by contracting elbow muscles while the arm was held flexed or extended. For the unloaded arm, conditioning led to a consistent pattern of errors which was attributed to signals from flexor and extensor muscle spindles. When elbow muscles were loaded the errors from conditioning converged, presumably because the spindles had become coactivated through the fusimotor system during the load-bearing contraction. However, this convergence was seen only when subjects supported a static load. When they moved the load, differences in errors from conditioning persisted. Muscle vibration during load bearing or moving a load did not alter the distribution of errors. It is concluded that for position sense of an unloaded arm in the horizontal plane the brain relies on signals from muscle spindles. When the arm is loaded, an additional signal of central origin contributes, but only if the load is moved. PMID:16873408

  3. Henry Friesen Award Lecture. Work, the clinician-scientist and human biochemical genetics.

    PubMed

    Scriver, C R

    2001-08-01

    The pursuit of human biochemical genetics has allowed us to understand better how the person with the (genetic) disease differs from the disease the person has and to develop the concept that genetics belongs in all aspects of health care. It is a perspective that comes quite readily to the clinician-scientist, and the restoration of that "species" in the era of functional genomics is strongly recommended. Garrod, the initial founder of human "biochemical genetics", belonged to the clinician-scientist community. Archibald Edward Garrod introduced a paradigm, new for its day, in medicine: biochemistry is dynamic and different from the static nature of organic chemistry. It led him to think about metabolic pathways and to recognize that variation in Mendelian heredity could explain an "inborn error of metabolism." At the time, Garrod had no idea about the nature of a gene. Genes are now well understood; genomes are being described for one organism after another (including Homo sapiens) and it is understood that genomes "speak biochemistry (not phenotype)." Accordingly, in the era of genomics, biochemistry and physiology become the bases of functional genomics, and it is possible to appreciate why "nothing in biology makes sense without evolution" (and nothing in medicine will make sense without biology). Mendelian, biochemical and molecular genetics together have revealed what lies behind the 4 canonical inborn errors described by Garrod (albinism, alkaptonuria, cystinuria and pentosuria). Both older and newer ideas in genetics, new tools for applying them (and renewed respect for the clinician-scientist) will enhance our understanding of the human biological variation that accounts for variant states of health and overt disease. A so-called monogenic phenotype (phenylketonuria) is used to illustrate, in some detail, that all disease phenotypes are, in one way or another, likely to be complex in nature. What can be known and what ought to be done, with knowledge about human genetics, to benefit individuals, families and communities (society), is both opportunity and challenge.

  4. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice (44%) the proportion of rotary wing compared to fixed wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  5. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600

  6. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  7. An Analysis of U.S. Army Fratricide Incidents during the Global War on Terror (11 September 2001 to 31 March 2008)

    DTIC Science & Technology

    2010-03-15

    The analysis is based on Reason's “Swiss cheese” model of human error causation (1990). Figure 1 describes how an accident is likely to occur when all of the errors, or “holes”, align. A detailed description of HFACS can be found in Wiegmann and Shappell (2003). (Figure 1: The Swiss cheese model of human error causation.)

  8. A Quality Improvement Project to Decrease Human Milk Errors in the NICU.

    PubMed

    Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E

    2017-02-01

    Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.
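
    The abstract notes that scanned error rates per 1000 bottles were monitored with statistical process control charts. Below is a generic u-chart calculation as one way such monitoring can be done; the monthly counts are invented for illustration and are not the study's data.

    ```python
    import numpy as np

    # u-chart of scanned errors per 1000 bottles (illustrative monthly counts, assumed)
    errors  = np.array([97, 88, 60, 45, 30, 22, 15, 11])
    bottles = np.array([1000, 1100, 950, 1050, 1000, 980, 1020, 1010])

    u = errors / bottles * 1000                         # observed rate per 1000 bottles
    u_bar = errors.sum() / bottles.sum() * 1000         # center line
    n = bottles / 1000                                  # subgroup size in units of 1000 bottles
    ucl = u_bar + 3 * np.sqrt(u_bar / n)                # upper control limit per month
    lcl = np.maximum(0, u_bar - 3 * np.sqrt(u_bar / n)) # lower control limit per month

    for month, (rate, hi, lo) in enumerate(zip(u, ucl, lcl), 1):
        flag = "out of control" if rate > hi or rate < lo else "in control"
        print(f"month {month}: {rate:5.1f} per 1000 bottles ({flag})")
    ```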

  9. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluating the residual risk of human errors in a measurement and testing laboratory, remaining after the error reduction by the laboratory quality system, and quantifying the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluation of the contribution of the residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
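
    As a rough illustration of the kind of Monte Carlo evaluation described above (not the authors' procedure), the sketch below propagates an assumed residual human-error risk into a measurement result and compares the standard uncertainty with and without that contribution; all probabilities and magnitudes are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n = 100_000
    base = rng.normal(7.20, 0.05, n)          # e.g. pH results with instrumental noise only (assumed)
    error_occurs = rng.random(n) < 0.02       # assumed 2% residual human-error risk per result
    bias = rng.normal(0.15, 0.05, n)          # assumed bias introduced when an error slips through
    result = base + error_occurs * bias       # results including occasional undetected human error

    u_without = 0.05                          # standard uncertainty without the human-error term
    u_with = result.std(ddof=1)               # empirical standard uncertainty including it
    print(f"u without human error: {u_without:.3f}, with: {u_with:.3f}")
    ```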

  10. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety-significant problems related to human error.

  11. Human error analysis of commercial aviation accidents using the human factors analysis and classification system (HFACS)

    DOT National Transportation Integrated Search

    2001-02-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework : originally developed and tested within the U.S. military as a tool for investigating and analyzing the human : causes of aviation accidents. Based upon ...

  12. Multi-LED parallel transmission for long distance underwater VLC system with one SPAD receiver

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Yu, Hong-Yi; Zhu, Yi-Jun; Wang, Tao; Ji, Ya-Wei

    2018-03-01

    In this paper, a multiple light emitting diode (LED) chip parallel transmission (Multi-LED-PT) scheme for an underwater visible light communication system with one photon-counting single photon avalanche diode (SPAD) receiver is proposed. As a lamp always consists of multiple LED chips, the data rate can be improved by driving these chips in parallel using the interleave-division-multiplexing technique. For each chip, on-off-keying modulation is used to reduce the influence of clipping. Then a serial successive interference cancellation detection algorithm, based on an ideal Poisson photon-counting channel model of the SPAD, is proposed. Finally, compared to a SPAD-based direct-current-biased optical orthogonal frequency division multiplexing system, the proposed Multi-LED-PT system can improve the error-rate performance and anti-nonlinearity performance significantly under the combined effects of absorption, scattering and weak turbulence-induced channel fading.
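
    A photon-counting SPAD receiver is commonly modeled as a Poisson channel. The sketch below simulates single-stream OOK detection on such a channel with a simple count threshold; it is a toy illustration under assumed photon rates, not the paper's multi-chip interleave-division scheme or its successive interference cancellation detector.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    n_bits = 10_000
    bits = rng.integers(0, 2, n_bits)
    lam_bg, lam_sig = 2.0, 12.0                       # assumed background / signal photon rates per slot
    counts = rng.poisson(lam_bg + bits * lam_sig)     # photon counts observed by the SPAD
    threshold = lam_bg + lam_sig / 2                  # simple midpoint threshold (ML would use the likelihoods)
    decisions = (counts > threshold).astype(int)
    ber = np.mean(decisions != bits)
    print(f"BER: {ber:.4f}")
    ```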

  13. Indoor high precision three-dimensional positioning system based on visible light communication using modified genetic algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Guan, Weipeng; Li, Simin; Wu, Yuxiang

    2018-04-01

    To improve the precision of indoor positioning and realize three-dimensional positioning, a reversed indoor positioning system based on visible light communication (VLC) using a genetic algorithm (GA) is proposed. To solve the problem of interference between signal sources, CDMA modulation is used: each light-emitting diode (LED) in the system broadcasts a unique identity (ID) code. The receiver receives a mixed signal from every LED reference point, and by the orthogonality of the spreading codes in CDMA modulation, the ID information and intensity attenuation information from every LED can be obtained. According to the positioning principle of received signal strength (RSS), the coordinates of the receiver can be determined. Due to system noise and imperfections of the devices used in the system, the distances between the receiver and the transmitters deviate from their real values, resulting in positioning error. By introducing error correction factors into the global parallel search of the genetic algorithm, the coordinates of the receiver in three-dimensional space can be determined precisely. Both simulation results and experimental results show that in practical application scenarios, the proposed positioning system can realize a high-precision positioning service.
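
    RSS-based positioning ultimately reduces to estimating ranges from the received intensities and solving for the coordinates that best fit them. The sketch below solves that step with a generic least-squares search rather than the paper's modified genetic algorithm; the LED anchor coordinates and the RSS-derived distances are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Assumed LED anchor positions on a 3 m ceiling and distances inferred from RSS
    leds = np.array([[0.0, 0.0, 3.0], [4.0, 0.0, 3.0], [0.0, 4.0, 3.0], [4.0, 4.0, 3.0]])
    d_meas = np.array([3.14, 4.22, 3.14, 4.22])

    def cost(p):
        # Sum of squared range residuals between candidate position p and each anchor
        return np.sum((np.linalg.norm(leds - p, axis=1) - d_meas) ** 2)

    est = minimize(cost, x0=np.array([2.0, 2.0, 1.0]), method="Nelder-Mead")
    print("Estimated receiver position:", est.x)   # close to (1.0, 2.0, 0.8) for these distances
    ```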

  14. Design of an Oximeter Based on LED-LED Configuration and FPGA Technology

    PubMed Central

    Stojanovic, Radovan; Karadaglic, Dejan

    2013-01-01

    A fully digital photoplethysmographic (PPG) sensor and actuator has been developed. The sensing circuit uses one Light Emitting Diode (LED) for emitting light into human tissue and one LED for detecting the light reflected from the tissue. A Field Programmable Gate Array (FPGA) is used to control the LEDs and determine the PPG signal and Blood Oxygen Saturation (SpO2). Configurations with two LEDs and with four LEDs are developed for measuring the PPG signal and SpO2, and an N-LED configuration is proposed for multichannel SpO2 measurements. The approach resulted in better spectral sensitivity, increased and adjustable resolution, reduced noise, small size, low cost and low power consumption. PMID:23291575

  15. Ethical Considerations on Disclosure When Medical Error Is Discovered During Medicolegal Death Investigation.

    PubMed

    Wolf, Dwayne A; Drake, Stacy A; Snow, Francine K

    2017-12-01

    In the course of fulfilling their statutory role, physicians performing medicolegal investigations may recognize clinical colleagues' medical errors. If the error is found to have led directly to the patient's death (missed diagnosis or incorrect diagnosis, for example), then the forensic pathologist has a professional responsibility to include the information in the autopsy report and make sure that the family is appropriately informed. When the error is significant but did not lead directly to the patient's demise, ethical questions may arise regarding the obligations of the medical examiner to disclose the error to the clinicians or to the family. This case depicts the discovery of medical error likely unrelated to the cause of death and describes one possible ethical approach to disclosure derived from an ethical reasoning model addressing ethical principles of respect for persons/autonomy, beneficence, nonmaleficence, and justice.

  16. Compound Stimulus Presentation Does Not Deepen Extinction in Human Causal Learning

    PubMed Central

    Griffiths, Oren; Holmes, Nathan; Westbrook, R. Fred

    2017-01-01

    Models of associative learning have proposed that cue-outcome learning critically depends on the degree of prediction error encountered during training. Two experiments examined the role of error-driven extinction learning in a human causal learning task. Target cues underwent extinction in the presence of additional cues, which differed in the degree to which they predicted the outcome, thereby manipulating outcome expectancy and, in the absence of any change in reinforcement, prediction error. These prediction error manipulations have each been shown to modulate extinction learning in aversive conditioning studies. While both manipulations resulted in increased prediction error during training, neither enhanced extinction in the present human learning task (one manipulation resulted in less extinction at test). The results are discussed with reference to the types of associations that are regulated by prediction error, the types of error terms involved in their regulation, and how these interact with parameters involved in training. PMID:28232809

  17. Human errors and occupational injuries of older female workers in the residential healthcare facilities for the elderly.

    PubMed

    Kim, Jun Sik; Jeong, Byung Yong

    2018-05-03

    The study aimed to describe the characteristics of occupational injuries of female workers in residential healthcare facilities for the elderly and to analyze human errors as causes of accidents. From the national industrial accident compensation data, 506 female injuries were analyzed by age and occupation. The results showed that medical service workers were the most prevalent (54.1%), followed by social welfare workers (20.4%). Among the injured, 55.7% had <1 year of work experience, and 37.9% were ≥60 years old. Slips/falls were the most common type of accident (42.7%), and the proportion injured by slips/falls increases with age. Among human errors, action errors were the primary causes, followed by perception errors and cognition errors. Moreover, the proportions of injuries due to perception errors and due to action errors each increase with age. The findings of this study suggest that there is a need to design workplaces that accommodate the characteristics of older female workers.

  18. [Risk and risk management in aviation].

    PubMed

    Müller, Manfred

    2004-10-01

    RISK MANAGEMENT: The large proportion of human errors in aviation accidents suggested a solution that at first sight seemed brilliant: replace the fallible human being with an "infallible" digitally operating computer. However, even after the introduction of the so-called HITEC airplanes, human error still accounts for 75% of all accidents. Thus, if the computer is ruled out as the ultimate safety system, how else can complex operations involving quick and difficult decisions be controlled? OPTIMIZED TEAM INTERACTION/PARALLEL CONNECTION OF THOUGHT MACHINES: Since a single person is always "highly error-prone", support and control have to be guaranteed by a second person. The independent work of two minds results in a safety network that cushions human errors more efficiently. NON-PUNITIVE ERROR MANAGEMENT: To be able to tackle the actual problems, the open discussion of errors that have been made must not be endangered by the threat of punishment. It has been shown in the past that progress is primarily achieved by investigating and following up mistakes, failures and catastrophes shortly after they happened. HUMAN FACTOR RESEARCH PROJECT: A comprehensive survey showed the following result: by far the most frequent safety-critical situation (37.8% of all events) consists of the following combination of risk factors: 1. A complication develops. 2. In this situation of increased stress, a human error occurs. 3. The negative effects of the error cannot be corrected or eased because there are deficiencies in team interaction on the flight deck. This means, for example, that a negative social climate has the effect of a "turbocharger" when a human error occurs. It needs to be pointed out that a negative social climate is not the same as a dispute. In many cases the working climate is burdened without the responsible person even noticing it: a negative first impression, too much or too little respect, contempt, misunderstandings, failing to voice vague concerns, etc. can considerably reduce the efficiency of a team.

  19. Competition between learned reward and error outcome predictions in anterior cingulate cortex.

    PubMed

    Alexander, William H; Brown, Joshua W

    2010-02-15

    The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.

  20. Metameric MIMO-OOK transmission scheme using multiple RGB LEDs.

    PubMed

    Bui, Thai-Chien; Cusani, Roberto; Scarano, Gaetano; Biagi, Mauro

    2018-05-28

    In this work, we propose a novel visible light communication (VLC) scheme utilizing multiple red-green-blue (RGB) LED triplets, each with a different emission spectrum of red, green and blue, to mitigate the effect of inter-color interference while using spatial multiplexing. On-off keying modulation is considered, and its effect on light emission in terms of flickering, dimming and color rendering is discussed so as to demonstrate how metameric properties have been taken into account. At the receiver, multiple photodiodes, each with a color filter tuned to one transmit light emitting diode (LED), are employed. Three detection mechanisms are then proposed: color zero forcing, minimum mean square error estimation, and minimum mean square error equalization. The system performance of the proposed scheme is evaluated both with computer simulations and with tests on an Arduino board implementation.
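
    The color zero forcing and MMSE detectors named above are linear MIMO detectors. The sketch below applies both to an assumed 3x3 RGB crosstalk matrix; the channel values, noise level, and the unit-power-symbol assumption in the MMSE filter are illustrative, not taken from the paper.

    ```python
    import numpy as np

    # Assumed crosstalk between the three colour channels (row = receive filter, column = transmit LED)
    H = np.array([[1.00, 0.20, 0.05],
                  [0.15, 1.00, 0.20],
                  [0.05, 0.25, 1.00]])
    x = np.array([1.0, 0.0, 1.0])         # OOK symbols on the three colour streams
    noise_var = 0.01
    y = H @ x + np.sqrt(noise_var) * np.random.default_rng(2).standard_normal(3)

    x_zf = np.linalg.solve(H, y)                                      # zero forcing: invert the channel
    W_mmse = np.linalg.inv(H.T @ H + noise_var * np.eye(3)) @ H.T     # MMSE filter (unit-power symbols assumed)
    x_mmse = W_mmse @ y

    print("ZF:  ", (x_zf > 0.5).astype(int))
    print("MMSE:", (x_mmse > 0.5).astype(int))
    ```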

  1. [Development and test of a wheat chlorophyll, nitrogen and water content meter].

    PubMed

    Yu, Bo; Sun, Ming; Han, Shu-Qing; Xia, Jin-Wen

    2011-08-01

    A portable meter was developed which can detect the chlorophyll, nitrogen and moisture content of wheat leaves simultaneously and can supply sufficient data for guiding fertilization and irrigation. The meter is composed of an optical path and an electronic circuit, and it uses 660, 940 and 1450 nm LEDs together with narrow-band filters as the active light source. The hardware circuit consists of a micro-controller, LED drive circuit, detector, communication circuit, keyboard and LCD circuit. The meter was tested in the field and performed well, with good repeatability and accuracy. The relative errors of the chlorophyll and nitrogen tests were about 10%, and the relative error for water content was 4%. The coefficients of variation of the three indices were all below 1.5%. These results show that the meter can be applied under field conditions to guide wheat production.

  2. Temporal uncertainty analysis of human errors based on interrelationships among multiple factors: a case of Minuteman III missile accident.

    PubMed

    Rong, Hao; Tian, Jin; Zhao, Tingdi

    2016-01-01

    In traditional approaches to human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some of the conditions (especially organizational or managerial conditions) can hardly be included, so the analysis is incomplete and does not reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitative estimation of how the human error probability (HEP) and its related variables change over a long period. Taking the Minuteman III missile accident of 2008 as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; the critical factors are also determined in terms of the impact that the variables have on risks in different time periods. The results indicate that both technical and organizational aspects should be addressed to minimize human errors in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
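
    The abstract does not give the stock-and-flow structure of the SD model, so the following is only a minimal sketch of the general pattern: a feedback loop in which organizational variables evolve over simulated decades and drive a time-varying human error probability. All variable names, equations and coefficients are hypothetical.

        import numpy as np

        # Minimal system-dynamics-style sketch: hypothetical stocks and rates only.
        years, dt = 50, 0.25
        steps = int(years / dt)

        training = 1.0                  # stock: organizational training/competence
        workload = 0.5                  # stock: accumulated operational pressure
        hep = np.zeros(steps)           # human error probability over time

        for t in range(steps):
            hep[t] = 0.001 * np.exp(1.5 * workload - 1.0 * training)
            retraining = 0.5 * hep[t]                  # reactive organizational response
            training_decay = 0.02 * training
            workload_growth = 0.01 * (1.0 - workload)  # pressure builds toward a ceiling
            training += dt * (retraining - training_decay)
            workload += dt * workload_growth

        print(f"HEP at year 0:  {hep[0]:.5f}")
        print(f"HEP at year 50: {hep[-1]:.5f}")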

  3. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    PubMed

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  4. Analysis of Light Emitting Diode Technology for Aerospace Suitability in Human Space Flight Applications

    NASA Astrophysics Data System (ADS)

    Treichel, Todd H.

    Commercial space designers are required to manage space flight designs in accordance with parts selections made from qualified parts listings approved by Department of Defense (DOD) and NASA agencies for reliability and safety. The research problem, shared by government and the private aerospace industry, is that LEDs cannot replace existing fluorescent lighting in manned space flight vehicles until the technology meets DOD and NASA requirements for reliability and safety, as well as for effects on astronaut cognition and health. The purpose of this quantitative experimental study was to determine to what extent commercial LEDs can suitably meet NASA requirements for manufacturer reliability, color reliability, robustness to environmental test requirements, and degradation effects from operational power, while providing comfortable ambient light free of eyestrain to astronauts in lieu of current fluorescent lighting. A fractional factorial experiment tested white and blue LEDs under NASA-required space flight environmental stress testing and applied operating current. The second phase of the study used a randomized block design to test the human factor effects of the LEDs and a qualified ISS fluorescent light on retinal fatigue and eye strain. Eighteen human subjects were recruited from university student members of the American Institute of Aeronautics and Astronautics. Findings from Phase 1 testing showed that commercial LEDs met all DOD and NASA requirements for manufacturer reliability, color reliability, robustness to environmental requirements, and degradation effects from operational power. The LED color and operational power variables showed statistically significant effects, but degraded light output levels did not fall below the industry-recognized 70% threshold. Findings from Phase 2 human factors testing showed no statistically significant evidence that the NASA-approved ISS fluorescent lights or the blue or white LEDs caused fatigue, eye strain and/or headache when study participants performed detailed tasks of reading and assembling mechanical parts for an extended period of two uninterrupted hours. However, the human subjects self-reported that the blue LEDs provided the whitest light and were the favored light source, over the white LED and the ISS fluorescent, as a sole artificial light source for space travel. According to NASA standards, the findings indicate that LEDs meet the criteria for a NASA TRL 7 rating, as commercial LED manufacturers passed the rigorous testing standards of suitability for space flight environments and human factor effects. Recommendations for future research include further testing for space flight, replicating this study while reducing its limitations by (1) testing human subjects' exposure to LEDs in a simulated space capsule environment over several days, and (2) installing and testing LEDs in space modules being tested for human spaceflight.

  5. Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).

    PubMed

    Wiegmann, D A; Shappell, S A

    2001-11-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.

  6. To Err Is Human; To Structurally Prime from Errors Is Also Human

    ERIC Educational Resources Information Center

    Slevc, L. Robert; Ferreira, Victor S.

    2013-01-01

    Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…

  7. Human factors analysis and classification system-HFACS.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident : reporting systems are not designed around any theoretical framework of human error. As a result, most : accident databases are not conduci...

  8. Broadband radiometric LED measurements

    NASA Astrophysics Data System (ADS)

    Eppeldauer, G. P.; Cooksey, C. C.; Yoon, H. W.; Hanssen, L. M.; Podobedov, V. B.; Vest, R. E.; Arp, U.; Miller, C. C.

    2016-09-01

    At present, broadband radiometric LED measurements with uniform and low-uncertainty results are not available. Currently, either complicated and expensive spectral radiometric measurements or broadband photometric LED measurements are used. The broadband photometric measurements are based on the CIE standardized V(λ) function, which cannot be used in the UV range and leads to large errors when blue or red LEDs are measured in its wings, where the realization is always poor. Reference irradiance meters with spectrally constant response and high-intensity LED irradiance sources were developed here to implement the previously suggested broadband radiometric LED measurement procedure [1, 2]. Using a detector with spectrally constant response, the broadband radiometric quantities of any LEDs or LED groups can be simply measured with low uncertainty without using any source standard. The spectral flatness of filtered-Si detectors and of low-noise pyroelectric radiometers is compared. Examples are given for integrated irradiance measurement of UV and blue LED sources using the reference (standard) pyroelectric irradiance meters introduced here. For validation, the broadband-measured integrated irradiances of several LED-365 sources were compared with the spectrally determined integrated irradiance derived from an FEL spectral irradiance lamp-standard. Integrated responsivity transfer from the reference irradiance meter to transfer standard and field UV irradiance meters is discussed.
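
    The record contrasts broadband (integrated) irradiance read by a spectrally flat detector with the value obtained by integrating a measured spectrum. The sketch below shows the spectral integration step only, using an invented Gaussian line shape for an LED-365 source; it is not the procedure of [1, 2].

        import numpy as np

        # Hypothetical spectral irradiance of an LED-365 source (Gaussian line shape,
        # purely illustrative), in W m^-2 nm^-1 on a 1 nm wavelength grid.
        wavelengths = np.arange(330.0, 401.0, 1.0)
        spectral_irradiance = 0.02 * np.exp(-0.5 * ((wavelengths - 365.0) / 5.0) ** 2)

        # Integrated (broadband) irradiance: trapezoidal integration over wavelength.
        # A spectrally flat reference radiometer would read this quantity directly.
        integrated = np.sum(
            0.5 * (spectral_irradiance[1:] + spectral_irradiance[:-1]) * np.diff(wavelengths)
        )
        print(f"Integrated irradiance: {integrated:.4f} W/m^2")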

  9. Technical approaches for measurement of human errors

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.

    1980-01-01

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part or full mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.

  10. How oocytes try to get it right: spindle checkpoint control in meiosis.

    PubMed

    Touati, Sandra A; Wassmann, Katja

    2016-06-01

    The generation of a viable, diploid organism depends on the formation of haploid gametes, oocytes, and spermatocytes, with the correct number of chromosomes. Halving the genome requires the execution of two consecutive specialized cell divisions named meiosis I and II. Unfortunately, and in contrast to male meiosis, chromosome segregation in oocytes is error prone, with human oocytes being extraordinarily "meiotically challenged". Aneuploid oocytes, that is, oocytes with the wrong number of chromosomes, give rise to aneuploid embryos when fertilized. In humans, most aneuploidies are lethal and result in spontaneous abortions. However, some trisomies survive to birth or even adulthood, such as the well-known trisomy 21, which gives rise to Down syndrome (Nagaoka et al. in Nat Rev Genet 13:493-504, 2012). A staggering 20-25 % of oocytes ready to be fertilized are aneuploid in humans. If this were not bad enough, there is an additional increase in meiotic missegregations as women get closer to menopause. A woman above 40 has a risk of more than 30 % of becoming pregnant with a trisomic child. Worse still, in industrialized western societies, child birth is delayed, with women having their first child later in life than ever. This trend has led to an increase of trisomic pregnancies by 70 % in the last 30 years (Nagaoka et al. in Nat Rev Genet 13:493-504, 2012; Schmidt et al. in Hum Reprod Update 18:29-43, 2012). To understand why errors occur so frequently during the meiotic divisions in oocytes, we review here the molecular mechanisms at work to control chromosome segregation during meiosis. An important mitotic control mechanism, namely the spindle assembly checkpoint or SAC, has been adapted to the special requirements of the meiotic divisions, and this review will focus on our current knowledge of SAC control in mammalian oocytes. Knowledge of how chromosome segregation is controlled in mammalian oocytes may help to identify risk factors important for questions related to human reproductive health.

  11. Historizing epistemology in psychology.

    PubMed

    Jovanović, Gordana

    2010-12-01

    The conflict between the psychometric methodological framework and the particularities of human experiences reported in the psychotherapeutic context led Michael Schwarz to raise the question of whether psychology is based on a methodological error. I take this conflict as a heuristic tool for the reconstruction of the early history of psychology, which bears witness to similar epistemological conflicts, though the dominant historiography of psychology has largely forgotten alternative conceptions and their valuable insights into the complexities of psychic phenomena. In order to work against the historical amnesia in psychology, I suggest looking at the cultural-historical contexts which decisively shaped epistemological choices in psychology. Instead of keeping epistemology and the history of psychology separate, which nurtures individualism and naturalism in psychology, I argue for historizing epistemology and for historical psychology. From such a historically reflected perspective, psychology in the contemporary world can be approached more critically.

  12. Two fatal tiger attacks in zoos.

    PubMed

    Tantius, Britta; Wittschieber, Daniel; Schmidt, Sven; Rothschild, Markus A; Banaschak, Sibylle

    2016-01-01

    Two captive tiger attacks are presented that took place in Cologne and Münster zoos. Both attacks occurred when the handlers, intent on cleaning the enclosures, entered whilst the tigers accidentally retained access to the location, and thus defended their territory against the perceived intruders. Both victims suffered fatal neck injuries from the bites. At Münster, colleagues managed to lure the tiger away from its victim to enable treatment, whilst the Cologne zoo tiger had to be shot in order to allow access to be gained. Whilst it was judged that human error led to the deaths of the experienced zookeepers, the investigation in Münster was closed as no third party was found to be at fault, whereas the Cologne zoo director was initially charged with being negligent. These charges were subsequently dismissed as safety regulations were found to be up to date.

  13. The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG.

    PubMed

    Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio

    2018-06-01

    Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Development of an errorable car-following driver model

    NASA Astrophysics Data System (ADS)

    Yang, H.-H.; Peng, H.

    2010-06-01

    An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (with-error) behaviours. This model was developed for evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on the statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-square identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar to, but somewhat higher than, that reported in traffic statistics.
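
    The record mentions recursive least-squares identification but not its details, so this is only a generic recursive least-squares sketch applied to an invented linear car-following structure; the regressors, gains and noise level are hypothetical.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical linear structure: driver acceleration = theta . [range error, range rate].
        true_theta = np.array([0.4, 0.8])
        theta = np.zeros(2)             # recursive least-squares (RLS) estimate
        P = np.eye(2) * 1e3             # estimate covariance
        lam = 0.99                      # forgetting factor

        for _ in range(2000):
            phi = rng.normal(size=2)                      # regressor (delayed measurements)
            y = true_theta @ phi + rng.normal(0.0, 0.05)  # observed driver response
            k = P @ phi / (lam + phi @ P @ phi)           # RLS gain
            theta = theta + k * (y - phi @ theta)         # parameter update
            P = (P - np.outer(k, phi) @ P) / lam          # covariance update

        print("Estimated parameters:", np.round(theta, 3))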

  15. Lost in Translation: the Case for Integrated Testing

    NASA Technical Reports Server (NTRS)

    Young, Aaron

    2017-01-01

    The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industries to reduce the chances for critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process and, in many cases, human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation will outline the unique benefits of integrated testing in catching critical flight errors that can otherwise go undetected, discuss HRA methods that are used to identify opportunities for human error, and review lessons learned and challenges over ownership of testing.

  16. Human factors in aircraft incidents - Results of a 7-year study (Andre Allard Memorial Lecture)

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Reynard, W. D.

    1984-01-01

    It is pointed out that nearly all fatal aircraft accidents are preventable, and that most such accidents are due to human error. The present discussion is concerned with the results of a seven-year study of the data collected by the NASA Aviation Safety Reporting System (ASRS). The Aviation Safety Reporting System was designed to stimulate as large a flow as possible of information regarding errors and operational problems in the conduct of air operations. It was implemented in April, 1976. In the following 7.5 years, 35,000 reports have been received from pilots, controllers, and the armed forces. Human errors are found in more than 80 percent of these reports. Attention is given to the types of events reported, possible causal factors in incidents, the relationship of incidents and accidents, and sources of error in the data. ASRS reports include sufficient detail to permit authorities to institute changes in the national aviation system designed to minimize the likelihood of human error, and to insulate the system against the effects of errors.

  17. Human factors in surgery: from Three Mile Island to the operating room.

    PubMed

    D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco

    2009-01-01

    Human factors is a definition that includes the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, the art of ensuring their successful applications to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Commonly, surgical error has been thought of as the consequence of lack of skill or ability, and is the result of thoughtless actions. Moreover the 'operating theatre' has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, errors arising not from technical incompetence but from poor interpersonal skills. Surgeons have to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.

  18. The Human Factors Analysis and Classification System : HFACS : final report.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident reporting systems are not designed around any theoretical framework of human error. As a result, most accident databases are not conducive t...

  19. Differential reliance of chimpanzees and humans on automatic and deliberate control of motor actions.

    PubMed

    Kaneko, Takaaki; Tomonaga, Masaki

    2014-06-01

    Humans are often unaware of how they control their limb motor movements. People pay attention to their own motor movements only when their usual motor routines encounter errors. Yet little is known about the extent to which voluntary actions rely on automatic control and when automatic control shifts to deliberate control in nonhuman primates. In this study, we demonstrate that chimpanzees and humans showed similar limb motor adjustment in response to feedback error during reaching actions, whereas attentional allocation inferred from gaze behavior differed. We found that humans shifted attention to their own motor kinematics as errors were induced in motor trajectory feedback regardless of whether the errors actually disrupted their reaching their action goals. In contrast, chimpanzees shifted attention to motor execution only when errors actually interfered with their achieving a planned action goal. These results indicate that the species differed in their criteria for shifting from automatic to deliberate control of motor actions. It is widely accepted that sophisticated motor repertoires have evolved in humans. Our results suggest that the deliberate monitoring of one's own motor kinematics may have evolved in the human lineage. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Midair collisions - The accidents, the systems, and the Realpolitik

    NASA Technical Reports Server (NTRS)

    Wiener, E. L.

    1980-01-01

    Two midair collisions occurring in 1978 are described, and the air traffic control system and procedures in use at the time, human factors implications and political consequences of the accidents are examined. The first collision occurred in Memphis and involved a Falcon jet and a Cessna 150 in a situation in which the controllers handling each aircraft were not aware of the presence of the other aircraft until it was too late. The second occurred in San Diego four months later, when a Boeing 727 on a visual approach struck a Cessna 172 from the rear. Following the San Diego collision there arose a great deal of investigative activity, resulting in suggestions for tighter control on visual flight rules aircraft and the expansion of positive control airspace. These issues then led to a political battle involving general aviation, the FAA and the Congress. It is argued, however, that the collisions were in fact system-induced errors resulting from an air traffic control system which emphasizes airspace allocation and politics rather than the various human factors problems facing pilots and controllers.

  1. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    PubMed

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

    In a medical environment such as an intensive care unit, there are many possible causes of error, and one important cause is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether or not the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the complete medical CPHSystem design models. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by the human intellectual tasks.

  2. Publisher Correction: Measuring progress from nationally determined contributions to mid-century strategies

    NASA Astrophysics Data System (ADS)

    Iyer, Gokul; Ledna, Catherine; Clarke, Leon; Edmonds, James; McJeon, Haewon; Kyle, Page; Williams, James H.

    2018-03-01

    In the version of this Article previously published, technical problems led to the wrong summary appearing on the homepage, and an incorrect Supplementary Information file being uploaded. Both errors have now been corrected.

  3. [Management of medication errors in general medical practice: Study in a pluriprofessionnal health care center].

    PubMed

    Pourrain, Laure; Serin, Michel; Dautriche, Anne; Jacquetin, Fréderic; Jarny, Christophe; Ballenecker, Isabelle; Bahous, Mickaël; Sgro, Catherine

    2018-06-07

    Medication errors are the most frequent adverse events in medical care in France. The process used in hospitals to manage them remains poorly applied in primary ambulatory care. The main objective of our study was to assess medication error management in general ambulatory practice. The secondary objectives were the characterization of the errors and the analysis of their root causes in order to implement corrective measures. The study was performed in a pluriprofessionnal health care house, applying the stages and tools validated by the French high health authority, which we had previously adapted to ambulatory medical care. During the 3-month study, 4712 medical consultations were performed and we collected 64 medication errors. Most of the affected patients were at the extreme ages of life (9.4% under 9 years and 64% over 70 years). Medication errors occurred at home in 39.1% of cases, at the pluriprofessionnal health care house (25.0%) or at the drugstore (17.2%). They led to serious clinical consequences (classified as major, critical or catastrophic) in 17.2% of cases. Drug-induced adverse effects occurred in 5 patients, 3 of whom needed hospitalization (1 patient recovered, 1 had sequelae and 1 died). In more than half of the cases, the errors occurred at the prescribing stage. The most frequent types of error were the use of a wrong drug, different from that indicated for the patient (37.5%), and poor treatment adherence (18.75%). The reported systemic causes were care process dysfunction (in coordination or procedure), the health care action context (patient's home, unplanned act, professional overwork), and human factors such as the patient's and the professional's condition. The professional team's adherence to the study was excellent. Our study demonstrates, for the first time in France, that medication error management in ambulatory general medical care can be implemented in a pluriprofessionnal health care house under two conditions: the presence of a trained team coordinator, and the use of validated, adapted and simple processes and tools. This study also shows that medication errors in general practice are specific to the organization of the care process. We identified vulnerable points, such as transfer and communication between home and care facilities or conversely, medical coordination, and the involvement of the patient himself in his care. Copyright © 2018 Société française de pharmacologie et de thérapeutique. Published by Elsevier Masson SAS. All rights reserved.

  4. Effectiveness of Toyota process redesign in reducing thyroid gland fine-needle aspiration error.

    PubMed

    Raab, Stephen S; Grzybicki, Dana Marie; Sudilovsky, Daniel; Balassanian, Ronald; Janosky, Janine E; Vrbin, Colleen M

    2006-10-01

    Our objective was to determine whether the Toyota Production System process redesign resulted in diagnostic error reduction for patients who underwent cytologic evaluation of thyroid nodules. In this longitudinal, nonconcurrent cohort study, we compared the diagnostic error frequency of a thyroid aspiration service before and after implementation of error reduction initiatives consisting of adoption of a standardized diagnostic terminology scheme and an immediate interpretation service. A total of 2,424 patients underwent aspiration. Following terminology standardization, the false-negative rate decreased from 41.8% to 19.1% (P = .006), the specimen nondiagnostic rate increased from 5.8% to 19.8% (P < .001), and the sensitivity increased from 70.2% to 90.6% (P < .001). Cases with an immediate interpretation had a lower noninterpretable specimen rate than those without immediate interpretation (P < .001). Toyota process change led to significantly fewer diagnostic errors for patients who underwent thyroid fine-needle aspiration.

  5. Rectangular illumination using a secondary optics with cylindrical lens for LED street light.

    PubMed

    Chen, Hsi-Chao; Lin, Jun-Yu; Chiu, Hsuan-Yi

    2013-02-11

    The illumination pattern of an LED street light is required to have a rectangular distribution with a divergence-angle ratio of 7:3 for economical illumination. Hence, this research presents a secondary optic with two cylindrical lenses, as distinct from a free-form curvature, for rectangular illumination. The analytical solution for the curvatures producing rectangles of different ratios was obtained by light tracing and boundary conditions. The similarity between the experiments and the simulation for a single LED and a 9-LED module was analyzed by Normalized Cross Correlation (NCC), and the error rate was studied by the Root Mean Square (RMS). The positional tolerance must be kept within ± 0.2 mm in the x, y and z directions to ensure that the relative illumination remains over 99%.
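
    The record names NCC and RMS as its comparison metrics without giving formulas; the sketch below shows one common definition of each, applied to an invented pair of simulated and "measured" illuminance maps.

        import numpy as np

        def normalized_cross_correlation(a, b):
            """NCC between two illuminance maps; 1.0 means identical spatial patterns."""
            a = (a - a.mean()) / a.std()
            b = (b - b.mean()) / b.std()
            return float(np.mean(a * b))

        def rms_error(a, b):
            """Root-mean-square difference between measured and simulated maps."""
            return float(np.sqrt(np.mean((a - b) ** 2)))

        # Hypothetical illuminance distributions on the road surface: a simulated
        # elongated pattern and a "measured" copy with small random deviations.
        rng = np.random.default_rng(2)
        x, y = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
        simulated = np.exp(-(x / 0.7) ** 2 - (y / 0.3) ** 2)
        measured = simulated * (1 + 0.02 * rng.normal(size=simulated.shape))

        print(f"NCC: {normalized_cross_correlation(measured, simulated):.4f}")
        print(f"RMS: {rms_error(measured, simulated):.4f}")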

  6. 71-Mbit/s ultraviolet-B LED communication link based on 8-QAM-OFDM modulation.

    PubMed

    Sun, Xiaobin; Zhang, Zhenyu; Chaaban, Anas; Ng, Tien Khee; Shen, Chao; Chen, Rui; Yan, Jianchang; Sun, Haiding; Li, Xiaohang; Wang, Junxi; Li, Jinmin; Alouini, Mohamed-Slim; Ooi, Boon S

    2017-09-18

    A demonstration of an ultraviolet-B (UVB) communication link is implemented utilizing quadrature amplitude modulation (QAM) orthogonal frequency-division multiplexing (OFDM). The demonstration is based on a 294-nm UVB light-emitting diode (UVB-LED) with a full-width at half-maximum (FWHM) of 9 nm and a light output power of 190 μW at 7 V, with a special silica gel lens on top of it. A -3-dB bandwidth of 29 MHz was measured, and a high-speed near-solar-blind communication link with a data rate of 71 Mbit/s was achieved using 8-QAM-OFDM at perfect alignment. A data rate of 23.6 Mbit/s was achieved using 2-QAM-OFDM when the angle subtended by the pointing directions of the UVB-LED and the photodetector (PD) is 12 degrees, thus establishing a diffuse line-of-sight (LOS) link. The measured bit-error rates (BER) of 2.8 × 10⁻⁴ and 2.4 × 10⁻⁴, respectively, are well below the forward error correction (FEC) criterion of 3.8 × 10⁻³. The demonstrated high-data-rate OFDM-based UVB communication link paves the way for realizing high-speed non-line-of-sight free-space optical communications.
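
    As a small illustration of the BER-versus-FEC comparison quoted above, the sketch below simulates a bit stream with a hypothetical flip probability and checks the estimated bit-error rate against the 3.8 × 10⁻³ pre-FEC limit; the bits are simulated, not data from the experiment.

        import numpy as np

        rng = np.random.default_rng(3)
        FEC_LIMIT = 3.8e-3              # pre-FEC BER criterion quoted in the abstract

        n_bits = 1_000_000
        tx_bits = rng.integers(0, 2, size=n_bits)
        flips = rng.random(n_bits) < 2.8e-4          # hypothetical channel error probability
        rx_bits = tx_bits ^ flips

        ber = np.mean(tx_bits != rx_bits)
        verdict = "below" if ber < FEC_LIMIT else "above"
        print(f"Estimated BER: {ber:.2e} ({verdict} the FEC limit of {FEC_LIMIT:.1e})")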

  7. Current pulse: can a production system reduce medical errors in health care?

    PubMed

    Printezis, Antonios; Gopalakrishnan, Mohan

    2007-01-01

    One of the reasons for rising health care costs is medical errors, a majority of which result from faulty systems and processes. Health care in the past has used process-based initiatives such as Total Quality Management, Continuous Quality Improvement, and Six Sigma to reduce errors. These initiatives to redesign health care, reduce errors, and improve overall efficiency and customer satisfaction have had moderate success. Current trend is to apply the successful Toyota Production System (TPS) to health care since its organizing principles have led to tremendous improvement in productivity and quality for Toyota and other businesses that have adapted them. This article presents insights on the effectiveness of TPS principles in health care and the challenges that lie ahead in successfully integrating this approach with other quality initiatives.

  8. Disclosure of Medical Errors in Oman

    PubMed Central

    Norrish, Mark I. K.

    2015-01-01

    Objectives: This study aimed to provide insight into the preferences for and perceptions of medical error disclosure (MED) by members of the public in Oman. Methods: Between January and June 2012, an online survey was used to collect responses from 205 members of the public across five governorates of Oman. Results: A disclosure gap was revealed between the respondents’ preferences for MED and perceived current MED practices in Oman. This disclosure gap extended to both the type of error and the person most likely to disclose the error. Errors resulting in patient harm were found to have a strong influence on individuals’ perceived quality of care. In addition, full disclosure was found to be highly valued by respondents and able to mitigate a perceived lack of care in cases where medical errors led to damages. Conclusion: The perceived disclosure gap between respondents’ MED preferences and perceptions of current MED practices in Oman needs to be addressed in order to increase public confidence in the national health care system. PMID:26052463

  9. Modeling human tracking error in several different anti-tank systems

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1981-01-01

    An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.

  10. [Using some modern mathematical models of postmortem cooling of the human body for the time of death determination].

    PubMed

    Vavilov, A Iu; Viter, V I

    2007-01-01

    Mathematical sources of error in modern thermometric models of postmortem cooling of the human body are considered. The main diagnostic sites used for thermometry are analyzed with a view to minimizing these errors. The authors propose practical recommendations for reducing errors in determining the time since death.
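
    As context for thermometric time-of-death models, the sketch below inverts the simplest single-exponential (Newtonian) cooling law to estimate the postmortem interval from one temperature reading; forensic practice uses more elaborate models and corrections, and all constants here are hypothetical.

        import math

        T_BODY0 = 37.0      # assumed core temperature at death, deg C
        T_AMBIENT = 18.0    # ambient temperature, deg C
        K = 0.08            # hypothetical cooling constant, 1/hour

        def temperature(t_hours):
            """Forward model: body core temperature t hours after death."""
            return T_AMBIENT + (T_BODY0 - T_AMBIENT) * math.exp(-K * t_hours)

        def hours_since_death(t_measured):
            """Inverse model: postmortem interval estimated from a measured temperature."""
            return -math.log((t_measured - T_AMBIENT) / (T_BODY0 - T_AMBIENT)) / K

        measured = 28.5  # deg C
        t_est = hours_since_death(measured)
        print(f"Estimated postmortem interval: {t_est:.1f} h")
        print(f"Forward-model check: {temperature(t_est):.1f} deg C")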

  11. The Hinton train disaster.

    PubMed

    Smiley, A M

    1990-10-01

    In February of 1986 a head-on collision occurred between a freight train and a passenger train in western Canada killing 23 people and causing over $30 million of damage. A Commission of Inquiry appointed by the Canadian government concluded that human error was the major reason for the collision. This report discusses the factors contributing to the human error: mainly poor work-rest schedules, the monotonous nature of the train driving task, insufficient information about train movements, and the inadequate backup systems in case of human error.

  12. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  13. STAMP-Based HRA Considering Causality Within a Sociotechnical System: A Case of Minuteman III Missile Accident.

    PubMed

    Rong, Hao; Tian, Jin

    2015-05-01

    The study contributes to human reliability analysis (HRA) by proposing a method that focuses more on human error causality within a sociotechnical system, illustrating its rationality and feasibility using the case of the Minuteman (MM) III missile accident. Due to the complexity and dynamics within a sociotechnical system, previous analyses of accidents involving human and organizational factors clearly demonstrated that methods using a sequential accident model are inadequate for analyzing human error within a sociotechnical system. System-theoretic accident model and processes (STAMP) was used to develop a universal framework for human error causal analysis. To elaborate the causal relationships and demonstrate the dynamics of human error, system dynamics (SD) modeling was conducted based on the framework. A total of 41 contributing factors, categorized into four types of human error, were identified through the STAMP-based analysis. All factors relate to a broad view of the sociotechnical system and are more comprehensive than the causation presented in the officially issued accident investigation report. Recommendations regarding both technical and managerial improvements to lower the risk of the accident are proposed. An interdisciplinary approach provides complementary support between system safety and human factors. The integrated method based on STAMP and the SD model contributes effectively to HRA. The proposed method will be beneficial to HRA, risk assessment, and control of the MM III operating process, as well as other sociotechnical systems. © 2014, Human Factors and Ergonomics Society.

  14. Antioxidant supplementation ameliorates molecular deficits in Smith-Lemli-Opitz Syndrome (SLOS)

    PubMed Central

    Korade, Zeljka; Xu, Libin; Harrison, Fiona E.; Ahsen, Refayat; Hart, Sarah E; Folkes, Oakleigh M; Mirnics, Karoly; Porter, Ned A

    2013-01-01

    Background Smith-Lemli-Opitz syndrome (SLOS) is an inborn error of cholesterol biosynthesis characterized by diminished cholesterol and increased 7-dehydrocholesterol (7-DHC) levels. 7-DHC is highly reactive, giving rise to biologically active oxysterols. Methods 7-DHC-derived oxysterols were measured in fibroblasts from SLOS patients and an in vivo SLOS rodent model using HPLC-MS-MS. Expression of lipid biosynthesis genes was ascertained by qPCR and Western blot. The effects of an antioxidant mixture, vitamin A, coenzyme Q10, vitamin C and vitamin E were evaluated for their potential to reduce formation of 7-DHC oxysterols in fibroblast from SLOS patients. Finally, the effect of maternal feeding of vitamin E enriched diet was ascertained in the brain and liver of newborn SLOS mice. Results In cultured human SLOS fibroblasts the antioxidant mixture led to decreased levels of the 7-DHC-derived oxysterol, DHCEO. Furthermore, gene expression changes in SLOS human fibroblasts were normalized with antioxidant treatment. The active ingredient appeared to be vitamin E, as even at low concentrations, it significantly decreased DHCEO levels. In addition, analyzing a mouse SLOS model revealed that feeding a vitamin E enriched diet to pregnant females led to a decrease in oxysterol formation in brain and liver tissues of the newborn Dhcr7-knockout pups. Conclusions Considering the adverse effects of 7-DHC-derived oxysterols in neuronal and glial cultures, and the positive effects of antioxidants in patient cell cultures and the transgenic mouse model, we believe that preventing formation of 7-DHC oxysterols is critical for countering the detrimental effects of Dhcr7 mutations. PMID:23896203

  15. Spectral design flexibility of LED brings better life

    NASA Astrophysics Data System (ADS)

    Ou, Haiyan; Corell, Dennis; Ou, Yiyu; Poulsen, Peter B.; Dam-Hansen, Carsten; Petersen, Paul-Michael

    2012-03-01

    Light-emitting diodes (LEDs) are penetrating the huge general lighting market because they are energy saving and environmentally friendly. The big advantage of LED light sources, compared to traditional incandescent lamps and fluorescent light tubes, is the flexibility of spectral design: white light can be made using different color mixing schemes. The spectral design flexibility of white LED light sources will promote novel applications that improve people's quality of life. As an initial exploration of this flexibility, we present an example: a 'no blue' white LED light source for sufferers of the disease Porphyria. An LED light source prototype, made of high-brightness commercial LEDs with an optical filter applied, was tested by a patient suffering from Porphyria. Preliminary results show that the sufferer could withstand this light source for a much longer time than a standard light source. Finally, future perspectives on how the spectral design flexibility of LED light sources can improve human life are discussed, with a focus on light and health. Good health is supported by a spectrum optimized so that vital hormones (melatonin and serotonin) are produced at the times when they support the human daily rhythm.

  16. Limb position sense, proprioceptive drift and muscle thixotropy at the human elbow joint

    PubMed Central

    Tsay, A; Savage, G; Allen, T J; Proske, U

    2014-01-01

    These experiments on the human forearm are based on the hypothesis that drift in the perceived position of a limb over time can be explained by receptor adaptation. Limb position sense was measured in 39 blindfolded subjects using a forearm-matching task. A property of muscle, its thixotropy, a contraction history-dependent passive stiffness, was exploited to place muscle receptors of elbow muscles in a defined state. After the arm had been held flexed and elbow flexors contracted, we observed time-dependent changes in the perceived position of the reference arm by an average of 2.8° in the direction of elbow flexion over 30 s (Experiment 1). The direction of the drift reversed after the arm had been extended and elbow extensors contracted, with a mean shift of 3.5° over 30 s in the direction of elbow extension (Experiment 2). The time-dependent changes could be abolished by conditioning elbow flexors and extensors in the reference arm at the test angle, although this led to large position errors during matching (±10°), depending on how the indicator arm had been conditioned (Experiments 3 and 4). When slack was introduced in the elbow muscles of both arms, by shortening muscles after the conditioning contraction, matching errors became small and there was no drift in position sense (Experiments 5 and 6). These experiments argue for a receptor-based mechanism for proprioceptive drift and suggest that to align the two forearms, the brain monitors the difference between the afferent signals from the two arms. PMID:24665096

  17. Nature of the refractive errors in rhesus monkeys (Macaca mulatta) with experimentally induced ametropias.

    PubMed

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-Su; Ramamirtham, Ramkumar; Smith, Earl L

    2010-08-23

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. Copyright 2010 Elsevier Ltd. All rights reserved.

  18. Nature of the Refractive Errors in Rhesus Monkeys (Macaca mulatta) with Experimentally Induced Ametropias

    PubMed Central

    Qiao-Grider, Ying; Hung, Li-Fang; Kee, Chea-su; Ramamirtham, Ramkumar; Smith, Earl L.

    2010-01-01

    We analyzed the contribution of individual ocular components to vision-induced ametropias in 210 rhesus monkeys. The primary contribution to refractive-error development came from vitreous chamber depth; a minor contribution from corneal power was also detected. However, there was no systematic relationship between refractive error and anterior chamber depth or between refractive error and any crystalline lens parameter. Our results are in good agreement with previous studies in humans, suggesting that the refractive errors commonly observed in humans are created by vision-dependent mechanisms that are similar to those operating in monkeys. This concordance emphasizes the applicability of rhesus monkeys in refractive-error studies. PMID:20600237

  19. Errors in the Extra-Analytical Phases of Clinical Chemistry Laboratory Testing.

    PubMed

    Zemlin, Annalise E

    2018-04-01

    The total testing process consists of various phases from the pre-preanalytical to the post-postanalytical phase, the so-called brain-to-brain loop. With improvements in analytical techniques and efficient quality control programmes, most laboratory errors now occur in the extra-analytical phases. There has been recent interest in these errors with numerous publications highlighting their effect on service delivery, patient care and cost. This interest has led to the formation of various working groups whose mission is to develop standardized quality indicators which can be used to measure the performance of service of these phases. This will eventually lead to the development of external quality assessment schemes to monitor these phases in agreement with ISO15189:2012 recommendations. This review focuses on potential errors in the extra-analytical phases of clinical chemistry laboratory testing, some of the studies performed to assess the severity and impact of these errors and processes that are in place to address these errors. The aim of this review is to highlight the importance of these errors for the requesting clinician.

  20. Refrigerated display case lighting with LEDs

    NASA Astrophysics Data System (ADS)

    Raghavan, Ramesh; Narendran, Nadarajah

    2002-11-01

    The rapid development of high brightness light emitting diodes (LEDs) has triggered many applications, especially in the area of display lighting. This paper focuses on the application of white LEDs in refrigerated display cases. The fluorescent lighting presently used in commercial refrigerators is inefficient in the application and also it provides poor lighting for merchandising. A laboratory human factors experiment was conducted to assess the preference for the different lighting systems, namely, fluorescent and LED. Two refrigerated display cases, one with the traditional fluorescent lighting system and the other with a prototype LED lighting system, were placed side-by-side in a laboratory setting. Illuminance measurements made within the two display cases showed that the lighting was more uniform with the LED system compared to the traditional fluorescent system. Sixteen human subjects participated in this study and rated their preference for the two lighting systems. The results show that human subjects strongly preferred the display case with the LED lighting. The authors of this manuscript believe a field study would be greatly beneficial to further confirm these results and to understand the relationship between preference and sales. Considering the luminous efficacy of white LEDs presently available in the marketplace, it is possible to develop a LED based lighting system for commercial refrigerators that is competitive with fluorescent lighting system in terms of energy use. The LED based lighting would provide better lighting than traditional fluorescent lighting.

  1. Broadband Radiometric LED Measurements

    PubMed Central

    Eppeldauer, G. P.; Cooksey, C. C.; Yoon, H. W.; Hanssen, L. M.; Podobedov, V. B.; Vest, R. E.; Arp, U.; Miller, C. C.

    2017-01-01

    At present, broadband radiometric measurements of LEDs with uniform and low-uncertainty results are not available. Currently, either complicated and expensive spectral radiometric measurements or broadband photometric LED measurements are used. The broadband photometric measurements are based on the CIE standardized V(λ) function, which cannot be used in the UV range and leads to large errors when blue or red LEDs are measured in its wings, where the realization is always poor. Reference irradiance meters with spectrally constant response and high-intensity LED irradiance sources were developed here to implement the previously suggested broadband radiometric LED measurement procedure [1, 2]. Using a detector with spectrally constant response, the broadband radiometric quantities of any LEDs or LED groups can be simply measured with low uncertainty without using any source standard. The spectral flatness of filtered-Si detectors and of low-noise pyroelectric radiometers is compared. Examples are given for integrated irradiance measurement of UV and blue LED sources using the reference (standard) pyroelectric irradiance meters introduced here. For validation, the broadband-measured integrated irradiances of several LED-365 sources were compared with the spectrally determined integrated irradiance derived from an FEL spectral irradiance lamp-standard. Integrated responsivity transfer from the reference irradiance meter to transfer standard and field UV irradiance meters is discussed. PMID:28649167

  2. Broadband Radiometric LED Measurements.

    PubMed

    Eppeldauer, G P; Cooksey, C C; Yoon, H W; Hanssen, L M; Podobedov, V B; Vest, R E; Arp, U; Miller, C C

    2016-01-01

    At present, broadband radiometric measurements of LEDs with uniform and low-uncertainty results are not available. Currently, either complicated and expensive spectral radiometric measurements or broadband photometric LED measurements are used. The broadband photometric measurements are based on the CIE standardized V(λ) function, which cannot be used in the UV range and leads to large errors when blue or red LEDs are measured in its wings, where the realization is always poor. Reference irradiance meters with spectrally constant response and high-intensity LED irradiance sources were developed here to implement the previously suggested broadband radiometric LED measurement procedure [1, 2]. Using a detector with spectrally constant response, the broadband radiometric quantities of any LEDs or LED groups can be simply measured with low uncertainty without using any source standard. The spectral flatness of filtered-Si detectors and of low-noise pyroelectric radiometers is compared. Examples are given for integrated irradiance measurement of UV and blue LED sources using the reference (standard) pyroelectric irradiance meters introduced here. For validation, the broadband-measured integrated irradiances of several LED-365 sources were compared with the spectrally determined integrated irradiance derived from an FEL spectral irradiance lamp-standard. Integrated responsivity transfer from the reference irradiance meter to transfer standard and field UV irradiance meters is discussed.

  3. Satellite Calibration With LED Detectors at Mud Lake

    NASA Technical Reports Server (NTRS)

    Hiller, Jonathan D.

    2005-01-01

    Earth-monitoring instruments in orbit must be routinely calibrated in order to accurately analyze the data obtained. By comparing radiometric measurements taken on the ground in conjunction with a satellite overpass, calibration curves are derived for an orbiting instrument. A permanent, automated facility is planned for Mud Lake, Nevada (a large, homogeneous, dry lakebed) for this purpose. Because some orbiting instruments have low resolution (250 meters per pixel), inexpensive radiometers using LEDs as sensors are being developed to array widely over the lakebed. LEDs are ideal because they are inexpensive, reliable, and sense over a narrow bandwidth. By obtaining and averaging widespread data, errors are reduced and long-term surface changes can be more accurately observed.

  4. Human Error as an Emergent Property of Action Selection and Task Place-Holding.

    PubMed

    Tamborello, Franklin P; Trafton, J Gregory

    2017-05-01

    A computational process model could explain how the dynamic interaction of human cognitive mechanisms produces each of multiple error types. With increasing capability and complexity of technological systems, the potential severity of consequences of human error is magnified. Interruption greatly increases people's error rates, as does the presence of other information to maintain in an active state. The model executed as a software-instantiated Monte Carlo simulation. It drew on theoretical constructs such as associative spreading activation for prospective memory, explicit rehearsal strategies as a deliberate cognitive operation to aid retrospective memory, and decay. The model replicated the 30% effect of interruptions on postcompletion error in Ratwani and Trafton's Stock Trader task, the 45% interaction effect on postcompletion error of working memory capacity and working memory load from Byrne and Bovair's Phaser Task, as well as the 5% perseveration and 3% omission effects of interruption from the UNRAVEL Task. Error classes including perseveration, omission, and postcompletion error fall naturally out of the theory. The model explains post-interruption error in terms of task state representation and priming for recall of subsequent steps. Its performance suggests that task environments providing more cues to current task state will mitigate error caused by interruption. For example, interfaces could provide labeled progress indicators or facilities for operators to quickly write notes about their task states when interrupted.
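
    The published model's equations are not given in this record; the Monte Carlo sketch below only illustrates the general pattern it describes, in which activation for the final task step decays during an interruption and noisy retrieval below a threshold produces a postcompletion omission. All parameters are hypothetical.

        import numpy as np

        rng = np.random.default_rng(4)

        # Hypothetical activation-based toy model of postcompletion error.
        BASE_ACTIVATION = 1.0
        DECAY_PER_SECOND = 0.05
        THRESHOLD = 0.4
        NOISE_SD = 0.25
        N_TRIALS = 100_000

        def omission_rate(interruption_seconds):
            """Fraction of trials in which the final step's cue is not retrieved."""
            activation = BASE_ACTIVATION - DECAY_PER_SECOND * interruption_seconds
            noisy = activation + rng.normal(0.0, NOISE_SD, size=N_TRIALS)
            return float(np.mean(noisy < THRESHOLD))

        for secs in (0, 5, 15):
            print(f"Interruption {secs:2d} s -> postcompletion omission rate {omission_rate(secs):.3f}")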

  5. Retraction notice to "Fungal diversity is negatively affected by habitat fragmentation: a meta-analysis" [Current Opinion in Microbiology 37 (2017) 61-66].

    PubMed

    Grilli, G; Longo, S; Huais, P Y; Pereyra, M; Verga, E; Urcelay, C; Galetto, L

    2017-06-01

    This article has been retracted: please see the Elsevier Policy on Article Withdrawal (https://www.elsevier.com/about/our-business/policies/article-withdrawal). This article, which has already been published (http://dx.doi.org/10.1016/j.mib.2017.03.015), has been withdrawn at the request of the editor and publisher. The publisher regrets that an error occurred which led to the premature publication of this paper. This error bears no reflection on the article or its authors. The publisher apologizes to the authors and the readers for this unfortunate error. Copyright © 2017.

  6. Sensitivity, optimal scaling and minimum roundoff errors in flexible structure models

    NASA Technical Reports Server (NTRS)

    Skelton, Robert E.

    1987-01-01

    Traditional modeling notions presume the existence of a truth model that relates the input to the output, without advanced knowledge of the input. This has led to the evolution of education and research approaches (including the available control and robustness theories) that treat the modeling and control design as separate problems. The paper explores the subtleties of this presumption that the modeling and control problems are separable. A detailed study of the nature of modeling errors is useful to gain insight into the limitations of traditional control and identification points of view. Modeling errors need not be small but simply appropriate for control design. Furthermore, the modeling and control design processes are inevitably iterative in nature.

  7. 77 FR 34989 - Notice of Inventory Completion: U.S. Department of the Interior, Bureau of Indian Affairs...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-12

    ... search through the survey collection led to the discovery of human bone fragments representing, at... survey collection led to the discovery of three human bone fragments representing, at minimum, one... discovery of one human bone fragment representing, at minimum, one individual. No associated funerary...

  8. Oxidative stress and alterations in DNA methylation: two sides of the same coin in reproduction.

    PubMed

    Menezo, Yves J R; Silvestris, Erica; Dale, Brian; Elder, Kay

    2016-12-01

    The negative effect of oxidative stress on the human reproductive process is no longer a matter for debate. Oxidative stress affects female and male gametes and the developmental capacity of embryos. Its effect can continue through late stages of pregnancy. Metabolic disorders and psychiatric problems can also be caused by DNA methylation and epigenetic errors. Age has a negative effect on oxidative stress and DNA methylation, and recent observations suggest that older men are at risk of transmitting epigenetic disorders to their offspring. Environmental endocrine disruptors can also increase oxidative stress and methylation errors. Oxidative stress and DNA methylation feature a common denominator: the one-carbon cycle. This important metabolic pathway stimulates glutathione synthesis and recycles homocysteine, a molecule that interferes with the process of methylation. Glutathione plays a pivotal role during oocyte activation, protecting against reactive oxygen species. Assisted reproductive techniques may exacerbate defects in methylation and epigenesis. Antioxidant supplements are proposed to reduce the risk of potentially harmful effects, but their use has failed to prevent problems and may sometimes be detrimental. New concepts reveal a significant correlation between oxidative stress, methylation processes and epigenesis, and have led to changes in media composition with positive preliminary clinical consequences. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  9. Application of failure mode and effect analysis in an assisted reproduction technology laboratory.

    PubMed

    Intra, Giulia; Alteri, Alessandra; Corti, Laura; Rabellotti, Elisa; Papaleo, Enrico; Restelli, Liliana; Biondo, Stefania; Garancini, Maria Paola; Candiani, Massimo; Viganò, Paola

    2016-08-01

    Assisted reproduction technology laboratories have a very high degree of complexity. Mismatches of gametes or embryos can occur, with catastrophic consequences for patients. To minimize the risk of error, a multi-institutional working group applied failure mode and effects analysis (FMEA) to each critical activity/step as a method of risk assessment. This analysis led to the identification of the potential failure modes, together with their causes and effects, using the risk priority number (RPN) scoring system. In total, 11 individual steps and 68 different potential failure modes were identified. The highest ranked failure modes, with an RPN score of 25, encompassed 17 failures and pertained to "patient mismatch" and "biological sample mismatch". The maximum reduction in risk, with RPN reduced from 25 to 5, was mostly related to the introduction of witnessing. The critical failure modes in sample processing were improved by 50% in the RPN by focusing on staff training. Three indicators of FMEA success, based on technical skill, competence and traceability, have been evaluated after FMEA implementation. Witnessing by a second human operator should be introduced in the laboratory to avoid sample mix-ups. These findings confirm that FMEA can effectively reduce errors in assisted reproduction technology laboratories. Copyright © 2016 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.
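
    The risk-priority arithmetic described above is straightforward to reproduce. The sketch below uses the conventional RPN = severity × occurrence × detectability product on 1-5 scales, which is consistent with the maximum score of 25 quoted in the abstract; the failure-mode names and scores themselves are placeholders, not data from the study.

      # Minimal FMEA bookkeeping: RPN = severity * occurrence * detectability.
      # 1-5 scales are assumed; names and scores are placeholders, not study data.
      from dataclasses import dataclass

      @dataclass
      class FailureMode:
          step: str
          description: str
          severity: int       # 1 (negligible) .. 5 (catastrophic)
          occurrence: int     # 1 (rare)       .. 5 (frequent)
          detectability: int  # 1 (always detected) .. 5 (undetectable)

          @property
          def rpn(self) -> int:
              return self.severity * self.occurrence * self.detectability

      modes = [
          FailureMode("oocyte retrieval", "patient mismatch", 5, 1, 5),
          FailureMode("embryo transfer", "biological sample mismatch", 5, 1, 5),
          FailureMode("sample processing", "tube mislabelled", 4, 2, 3),
      ]

      # After a mitigation such as double witnessing, detectability improves and the
      # failure mode is rescored, showing the RPN reduction (e.g. 25 -> 5 as reported).
      for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
          print(f"{m.step:18s} {m.description:28s} RPN={m.rpn:3d}")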

  10. Multiple transfer standard for calibration and characterization of test setups for LED lamps and luminaires in industry

    NASA Astrophysics Data System (ADS)

    Sperling, A.; Meyer, M.; Pendsa, S.; Jordan, W.; Revtova, E.; Poikonen, T.; Renoux, D.; Blattner, P.

    2018-04-01

    Proper characterization of test setups used in industry for testing and traceable measurement of lighting devices by the substitution method is an important task. According to new standards for testing LED lamps, luminaires and modules, uncertainty budgets are requested because in many cases the properties of the device under test differ from those of the transfer standard used, which may cause significant errors, for example if an LED-based lamp is tested or calibrated in an integrating sphere that was calibrated with a tungsten lamp. This paper introduces a multiple transfer standard, which was designed not only to transfer a single calibration value (e.g. luminous flux) but also to characterize test setups used for LED measurements, with additional provided and calibrated output features to enable the application of the new standards.

  11. Simultaneous Control of Error Rates in fMRI Data Analysis

    PubMed Central

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-01-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (or global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to ‘cleaner’-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. PMID:26272730
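
    A toy illustration of the voxel-wise likelihood-ratio idea described above, assuming Gaussian voxel statistics with known variance: evidence for an activation effect of an assumed size is weighed against evidence for no effect, and a voxel is flagged only when the likelihood ratio is strong in either direction. The effect size, the evidence threshold, and all variable names are illustrative assumptions, not the paper's implementation.

      # Toy voxel-wise likelihood ratio: evidence for an activation effect of size
      # delta versus no effect, under a Gaussian model with known variance.
      # Threshold k and all parameter values are illustrative assumptions.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      n_voxels, n_scans, sigma, delta, k = 5000, 40, 1.0, 0.5, 8.0

      truly_active = rng.random(n_voxels) < 0.1
      effects = np.where(truly_active, delta, 0.0)
      data = rng.normal(effects[:, None], sigma, size=(n_voxels, n_scans))
      xbar = data.mean(axis=1)
      se = sigma / np.sqrt(n_scans)

      # Likelihood ratio L(mu = delta) / L(mu = 0), evaluated via the voxel mean
      log_lr = norm.logpdf(xbar, loc=delta, scale=se) - norm.logpdf(xbar, loc=0.0, scale=se)
      lr = np.exp(log_lr)

      active_call = lr >= k          # strong evidence for the effect
      null_call = lr <= 1.0 / k      # strong evidence for no effect

      type1 = np.mean(active_call[~truly_active])   # false positives among null voxels
      type2 = np.mean(null_call[truly_active])      # "strong null" calls among active voxels
      print(f"false-positive rate {type1:.4f}, false-negative rate {type2:.4f}")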

  12. Human error and the search for blame

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1989-01-01

    Human error is a frequent topic in discussions about risks in using computer systems. A rational analysis of human error leads through the consideration of mistakes to standards that designers use to avoid mistakes that lead to known breakdowns. The irrational side, however, is more interesting. It conditions people to think that breakdowns are inherently wrong and that there is ultimately someone who is responsible. This leads to a search for someone to blame which diverts attention from: learning from the mistakes; seeing the limitations of current engineering methodology; and improving the discourse of design.

  13. Personality factors in flight operations. Volume 1: Leader characteristics and crew performance in a full-mission air transport simulation

    NASA Technical Reports Server (NTRS)

    Chidester, Thomas R.; Kanki, Barbara G.; Foushee, H. Clayton; Dickinson, Cortlandt L.; Bowles, Stephen V.

    1990-01-01

    Crew effectiveness is a joint product of the piloting skills, attitudes, and personality characteristics of team members. As obvious as this point might seem, both traditional approaches to optimizing crew performance and more recent training development highlighting crew coordination have emphasized only the skill and attitudinal dimensions. This volume is the first in a series of papers on this simulation. A subsequent volume will focus on patterns of communication within crews. The results of a full-mission simulation research study assessing the impact of individual personality on crew performance are reported. Using a selection algorithm described in previous research, captains were classified as fitting one of three profiles along a battery of personality assessment scales. The performances of 23 crews led by captains fitting each profile were contrasted over a one-and-one-half-day simulated trip. Crews led by captains fitting a positive Instrumental-Expressive profile (high achievement motivation and interpersonal skill) were consistently effective and made fewer errors. Crews led by captains fitting a Negative Expressive profile (below average achievement motivation, negative expressive style, such as complaining) were consistently less effective and made more errors. Crews led by captains fitting a Negative Instrumental profile (high levels of competitiveness, verbal aggressiveness, and impatience and irritability) were less effective on the first day but equal to the best on the second day. These results underscore the importance of stable personality variables as predictors of team coordination and performance.

  14. Why do adult dogs (Canis familiaris) commit the A-not-B search error?

    PubMed

    Sümegi, Zsófia; Kis, Anna; Miklósi, Ádám; Topál, József

    2014-02-01

    It has been recently reported that adult domestic dogs, like human infants, tend to commit perseverative search errors; that is, they select the previously rewarded empty location in the Piagetian A-not-B search task because of the experimenter's ostensive communicative cues. There is, however, an ongoing debate over whether these findings reveal that dogs can use human ostensive referential communication as a source of information or whether the phenomenon can be accounted for by "more simple" explanations like insufficient attention and learning based on local enhancement. In 2 experiments the authors systematically manipulated the type of human cueing (communicative or noncommunicative) adjacent to the A hiding place during both the A and B trials. Results highlight 3 important aspects of the dogs' A-not-B error: (a) search errors are influenced to a certain extent by dogs' motivation to retrieve the toy object; (b) human communicative and noncommunicative signals have different error-inducing effects; and (c) communicative signals presented at the A hiding place during the B trials but not during the A trials play a crucial role in inducing the A-not-B error, and it can be induced even without demonstrating repeated hiding events at location A. These findings further confirm the notion that perseverative search error, at least partially, reflects a "ready-to-obey" attitude in the dog rather than insufficient attention and/or working memory.

  15. #2 - An Empirical Assessment of Exposure Measurement Error ...

    EPA Pesticide Factsheets

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of EPA's mission to protect human health and the environment. HEASD's research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of EPA's strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  16. Precise Temperature Mapping of GaN-Based LEDs by Quantitative Infrared Micro-Thermography

    PubMed Central

    Chang, Ki Soo; Yang, Sun Choel; Kim, Jae-Young; Kook, Myung Ho; Ryu, Seon Young; Choi, Hae Young; Kim, Geon Hee

    2012-01-01

    A method of measuring the precise temperature distribution of GaN-based light-emitting diodes (LEDs) by quantitative infrared micro-thermography is reported. To reduce the calibration error, the same measuring conditions were used for both calibration and thermal imaging; calibration was conducted on a highly emissive black-painted area on a dummy sapphire wafer loaded near the LED wafer on a thermoelectric cooler mount. We used infrared thermal radiation images of the black-painted area on the dummy wafer and an unbiased LED wafer at two different temperatures to determine the factors that degrade the accuracy of temperature measurement, i.e., the non-uniform response of the instrument, superimposed offset radiation, reflected radiation, and the emissivity map of the LED surface. By correcting these factors in the measured infrared thermal radiation images of biased LEDs, we determined a precise absolute temperature image. Consequently, we could observe where the local self-heating emerges and how it is distributed over the emitting area of the LEDs. The experimental results demonstrated that highly localized self-heating and a remarkable temperature gradient, which are detrimental to LED performance and reliability, arise near the p-contact edge of the LED surface at high injection levels owing to the current crowding effect. PMID:22666050

  17. [Study on the safety of blue light leak of LED].

    PubMed

    Shen, Chong-Yu; Xu, Zheng; Zhao, Su-Ling; Huang, Qing-Yu

    2014-02-01

    In this paper, the blue-light properties of LED illumination devices have been investigated. Given the status quo of China's LED lighting, we measured the spectral composition of LED lamps and analyzed their photobiological safety under the current domestic and international standards GB/T 20145-2006/CIE S009/E:2002 and IEC 62471:2006, as well as the CTL-0744_2009 laser resolution, which provides a reference for the manufacture of LED lighting lamps and for related safety standards and laws. If the blue-light radiance of an LED is lower than 100 W·m⁻²·sr⁻¹, there is no harm to human eyes. LEDs will not cause harm to human eyes under normal use, but attention should be paid to the protection of special populations (children), making sure that they avoid looking at a light source for a long time. The research also found that blue-rich lamps can affect the human rhythm of work and rest; therefore, LED lamps with a color temperature below 4 000 K and a color rendering index of 80 are suitable for indoor use. At the same time, lamps with different parameters should be selected according to the different distances.

  18. A pharmacist-led information technology intervention for medication errors (PINCER): a multicentre, cluster randomised, controlled trial and cost-effectiveness analysis

    PubMed Central

    Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Cresswell, Kathrin; Eden, Martin; Elliott, Rachel A; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Prescott, Robin J; Swanwick, Glen; Franklin, Matthew; Putman, Koen; Boyd, Matthew; Sheikh, Aziz

    2012-01-01

    Summary Background Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. Methods In this pragmatic, cluster randomised trial general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to general practices, patients, pharmacists, researchers, and statisticians. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. Findings 72 general practices with a combined list size of 480 942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0·58, 95% CI 0·38–0·89); a β blocker if they had asthma (0·73, 0·58–0·91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0·51, 0·34–0·78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. Interpretation The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Funding Patient Safety Research Portfolio, Department of Health, England. PMID:22357106

  19. Improved compliance with the World Health Organization Surgical Safety Checklist is associated with reduced surgical specimen labelling errors.

    PubMed

    Martis, Walston R; Hannam, Jacqueline A; Lee, Tracey; Merry, Alan F; Mitchell, Simon J

    2016-09-09

    A new approach to administering the surgical safety checklist (SSC) at our institution using wall-mounted charts for each SSC domain coupled with migrated leadership among operating room (OR) sub-teams, led to improved compliance with the Sign Out domain. Since surgical specimens are reviewed at Sign Out, we aimed to quantify any related change in surgical specimen labelling errors. Prospectively maintained error logs for surgical specimens sent to pathology were examined for the six months before and after introduction of the new SSC administration paradigm. We recorded errors made in the labelling or completion of the specimen pot and on the specimen laboratory request form. Total error rates were calculated from the number of errors divided by total number of specimens. Rates from the two periods were compared using a chi square test. There were 19 errors in 4,760 specimens (rate 3.99/1,000) and eight errors in 5,065 specimens (rate 1.58/1,000) before and after the change in SSC administration paradigm (P=0.0225). Improved compliance with administering the Sign Out domain of the SSC can reduce surgical specimen errors. This finding provides further evidence that OR teams should optimise compliance with the SSC.
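
    The rate comparison reported above can be reproduced directly from the counts in the abstract with a chi-square test on a 2×2 table. The sketch below assumes no continuity correction, which gives a P value of about 0.0225 and matches the reported figure.

      # Reproduce the before/after comparison of specimen labelling error rates
      # (19/4,760 vs 8/5,065) with a 2x2 chi-square test, no continuity correction.
      from scipy.stats import chi2_contingency

      before_errors, before_total = 19, 4760
      after_errors, after_total = 8, 5065

      table = [[before_errors, before_total - before_errors],
               [after_errors, after_total - after_errors]]

      chi2, p, dof, expected = chi2_contingency(table, correction=False)
      print(f"rate before: {1000 * before_errors / before_total:.2f} per 1,000")
      print(f"rate after : {1000 * after_errors / after_total:.2f} per 1,000")
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")   # p ~= 0.0225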

  20. Urban rail transit projects : forecast versus actual ridership and costs. final report

    DOT National Transportation Integrated Search

    1989-10-01

    Substantial errors in forecasting ridership and costs for the ten rail transit projects reviewed in this report, put forth the possibility that more accurate forecasts would have led decision-makers to select projects other than those reviewed in thi...

  1. 78 FR 18977 - Agency Information Collection Activities; Submission to OMB for Review and Approval; Comment...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-28

    ... preferred method), or by email to: [email protected] , or by mail to: EPA Docket Center (EPA/DC... in Agency burden is related to a mathematical error in the calculations, which led to double counting...

  2. Cutting the Cord: Discrimination and Command Responsibility in Autonomous Lethal Weapons

    DTIC Science & Technology

    2014-02-13

    machine responses to identical stimuli, and it was the job of a third party human “witness” to determine which participant was man and which was...machines may be error free, but there are potential benefits to be gained through autonomy if machines can meet or exceed human performance in...lieu of human operators and reap the benefits that autonomy provides. Human and Machine Error It would be foolish to assert that either humans

  3. Accurate chromatic control and color rendering optimization in LED lighting systems using junction temperature feedback

    NASA Astrophysics Data System (ADS)

    Sisto, Marco Michele; Gauvin, Jonny

    2014-09-01

    Accurate color control of LED lighting systems is a challenging task: noticeable chromaticity shifts are commonly observed in mixed-color and phosphor converted LEDs due to intensity dimming. Furthermore, the emitted color varies with the LED temperature. We present a novel color control method for tri-chromatic and tetra-chromatic LEDs, which enables the LED emission to be set and maintained at a target color, or at a combination of correlated color temperature (CCT) and intensity. The LED color point is maintained over variations in the LED junctions' temperatures and intensity dimming levels. The method does not require color feedback sensors, so as to minimize system complexity and cost, but relies on estimation of the LED junctions' temperatures from the junction voltages. If operated with tetra-chromatic LEDs, the method allows meeting an additional optimization criterion: for example, the maximization of a color rendering metric like the Color Rendering Index (CRI) or the Color Quality Scale (CQS), thus providing a high quality and clarity of colors on the surface illuminated by the LED. We demonstrate the control of an RGBW LED at a target D65 white point with a CIELAB color difference ΔE*ab < 1 for simultaneous variations of flux from approximately 30 lm to 100 lm and LED heat sink temperature from 25°C to 58°C. Under the same conditions, we demonstrate a CCT error <1%. Furthermore, the method allows varying the LED CCT from 5500 K to 8000 K while maintaining luminance within 1% of target. Further work is ongoing to evaluate the stability of the method over LED aging.
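
    The figure of merit quoted above, the CIE 1976 color difference ΔE*ab, is simply the Euclidean distance between two points in CIELAB. A minimal sketch follows; the target and measured L*, a*, b* values are placeholders, not measurements from the paper.

      # CIE 1976 colour difference Delta E*ab between a target white point and a
      # measured colour, both expressed in CIELAB; the Lab values are placeholders.
      import math

      def delta_e_ab(lab1, lab2):
          """Euclidean distance in CIELAB (CIE 1976 Delta E*ab)."""
          return math.dist(lab1, lab2)

      target_d65 = (95.0, 0.2, -0.3)      # placeholder L*, a*, b* near the D65 target
      measured = (94.6, 0.5, 0.1)         # placeholder measurement after dimming

      dE = delta_e_ab(target_d65, measured)
      print(f"Delta E*ab = {dE:.2f}  (control goal in the paper: < 1)")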

  4. Human error identification for laparoscopic surgery: Development of a motion economy perspective.

    PubMed

    Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong

    2015-09-01

    This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  5. Managing human error in aviation.

    PubMed

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through team work. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  6. Single-exposure quantitative phase imaging in color-coded LED microscopy.

    PubMed

    Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin

    2017-04-03

    We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.
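
    One common way to handle the colour-leakage problem described above is to calibrate a 3×3 crosstalk matrix (how strongly each LED colour registers in each sensor channel) and invert it; the abstract does not state that the paper uses exactly this formulation, so the sketch below is a generic assumption, with a made-up crosstalk matrix.

      # Generic colour-leakage (crosstalk) correction: if each RGB sensor channel
      # picks up a mixture of the three LED illuminations, recover the per-LED
      # intensity images by inverting a calibrated 3x3 mixing matrix.
      # The matrix values here are made up for illustration.
      import numpy as np

      # M[i, j] = response of sensor channel i to LED colour j (calibrated once)
      M = np.array([[0.92, 0.06, 0.01],
                    [0.05, 0.88, 0.09],
                    [0.02, 0.07, 0.90]])
      M_inv = np.linalg.inv(M)

      h, w = 4, 4                                              # tiny image for the example
      true_leds = np.random.default_rng(2).random((3, h, w))   # per-LED images
      captured = np.tensordot(M, true_leds, axes=1)            # what the camera records

      corrected = np.tensordot(M_inv, captured, axes=1)
      print("max recovery error:", float(np.abs(corrected - true_leds).max()))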

  7. Empirical Analysis of Systematic Communication Errors.

    DTIC Science & Technology

    1981-09-01

    human components in communication systems. (Systematic errors were defined to be those that occur regularly in human communication links...phase of the human communication process and focuses on the linkage between a specific piece of information (and the receiver) and the transmission...communication flow. (2) Exchange. Exchange is the next phase in human communication and entails a concerted effort on the part of the sender and receiver to share

  8. HRA Aerospace Challenges

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2013-01-01

    Compared to equipment designed to perform the same function over and over, humans are just not as reliable. Computers and machines perform the same action in the same way repeatedly getting the same result, unless equipment fails or a human interferes. Humans who are supposed to perform the same actions repeatedly often perform them incorrectly due to a variety of issues including: stress, fatigue, illness, lack of training, distraction, acting at the wrong time, not acting when they should, not following procedures, misinterpreting information or inattention to detail. Why not use robots and automatic controls exclusively if human error is so common? In an emergency or off normal situation that the computer, robotic element, or automatic control system is not designed to respond to, the result is failure unless a human can intervene. The human in the loop may be more likely to cause an error, but is also more likely to catch the error and correct it. When it comes to unexpected situations, or performing multiple tasks outside the defined mission parameters, humans are the only viable alternative. Human Reliability Assessments (HRA) identifies ways to improve human performance and reliability and can lead to improvements in systems designed to interact with humans. Understanding the context of the situation that can lead to human errors, which include taking the wrong action, no action or making bad decisions provides additional information to mitigate risks. With improved human reliability comes reduced risk for the overall operation or project.

  9. Identifying Human Factors Issues in Aircraft Maintenance Operations

    NASA Technical Reports Server (NTRS)

    Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)

    1995-01-01

    Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986-1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g., wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not) as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding what variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factor issues involved.

  10. Managing Errors to Reduce Accidents in High Consequence Networked Information Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ganter, J.H.

    1999-02-01

    Computers have always helped to amplify and propagate errors made by people. The emergence of Networked Information Systems (NISs), which allow people and systems to quickly interact worldwide, has made understanding and minimizing human error more critical. This paper applies concepts from system safety to analyze how hazards (from hackers to power disruptions) penetrate NIS defenses (e.g., firewalls and operating systems) to cause accidents. Such events usually result from both active, easily identified failures and more subtle latent conditions that have resided in the system for long periods. Both active failures and latent conditions result from human errors. We classify these into several types (slips, lapses, mistakes, etc.) and provide NIS examples of how they occur. Next we examine error minimization throughout the NIS lifecycle, from design through operation to reengineering. At each stage, steps can be taken to minimize the occurrence and effects of human errors. These include defensive design philosophies, architectural patterns to guide developers, and collaborative design that incorporates operational experiences and surprises into design efforts. We conclude by looking at three aspects of NISs that will cause continuing challenges in error and accident management: immaturity of the industry, limited risk perception, and resource tradeoffs.

  11. Investigating mode errors on automated flight decks: illustrating the problem-driven, cumulative, and interdisciplinary nature of human factors research.

    PubMed

    Sarter, Nadine

    2008-06-01

    The goal of this article is to illustrate the problem-driven, cumulative, and highly interdisciplinary nature of human factors research by providing a brief overview of the work on mode errors on modern flight decks over the past two decades. Mode errors on modern flight decks were first reported in the late 1980s. Poor feedback, inadequate mental models of the automation, and the high degree of coupling and complexity of flight deck systems were identified as main contributors to these breakdowns in human-automation interaction. Various improvements of design, training, and procedures were proposed to address these issues. The author describes when and why the problem of mode errors surfaced, summarizes complementary research activities that helped identify and understand the contributing factors to mode errors, and describes some countermeasures that have been developed in recent years. This brief review illustrates how one particular human factors problem in the aviation domain enabled various disciplines and methodological approaches to contribute to a better understanding of, as well as provide better support for, effective human-automation coordination. Converging operations and interdisciplinary collaboration over an extended period of time are hallmarks of successful human factors research. The reported body of research can serve as a model for future research and as a teaching tool for students in this field of work.

  12. The Swiss cheese model of adverse event occurrence--Closing the holes.

    PubMed

    Stein, James E; Heiss, Kurt

    2015-12-01

    Traditional surgical attitude regarding error and complications has focused on individual failings. Human factors research has brought new and significant insights into the occurrence of error in healthcare, helping us identify systemic problems that injure patients while enhancing individual accountability and teamwork. This article introduces human factors science and its applicability to teamwork, surgical culture, medical error, and individual accountability. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Behind Human Error: Cognitive Systems, Computers and Hindsight

    DTIC Science & Technology

    1994-12-01

    evaluations • Organize and/or conduct workshops and conferences. CSERIAC is a Department of Defense Information Analysis Center sponsored by the Defense... Contents fragments: Neutral Observer Criteria; Error Analysis as Causal Judgment; Error as Information; A Fundamental Surprise; What is Human... Kahnemann, 1974), and in risk analysis (Dougherty and Fragola, 1990). The discussions have continued in a wide variety of forums, including the

  14. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
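
    The simulation logic described above, drawing quantification scores from different probability mass functions that stand for confident, reasonably doubting, and irresolute experts and then comparing the variability of the resulting scores, can be sketched as follows. The 1-5 score scale and the pmf values are assumptions for illustration, not the elicited data.

      # Sketch of the score simulation: draw likelihood/severity scores from
      # different expert-behaviour pmfs and compare the variability of the results.
      # The 1-5 score scale and pmf values are assumptions for illustration.
      import numpy as np

      rng = np.random.default_rng(3)
      scores = np.arange(1, 6)

      pmfs = {
          "confident":           [0.00, 0.05, 0.10, 0.25, 0.60],
          "reasonably doubting": [0.05, 0.10, 0.25, 0.35, 0.25],
          "irresolute":          [0.20, 0.20, 0.20, 0.20, 0.20],
      }

      n_draws = 100_000
      for behaviour, p in pmfs.items():
          draws = rng.choice(scores, size=n_draws, p=p)
          print(f"{behaviour:20s} mean={draws.mean():.2f}  sd={draws.std(ddof=1):.2f}")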

  15. Consistency errors in p-values reported in Spanish psychology journals.

    PubMed

    Caperos, José Manuel; Pardo, Antonio

    2013-01-01

    Recent reviews have drawn attention to frequent consistency errors when reporting statistical results. We have reviewed the statistical results reported in 186 articles published in four Spanish psychology journals. Of these articles, 102 contained at least one of the statistics selected for our study: Fisher's F, Student's t, and Pearson's χ². Out of the 1,212 complete statistics reviewed, 12.2% presented a consistency error, meaning that the reported p-value did not correspond to the reported value of the statistic and its degrees of freedom. In 2.3% of the cases, the correct calculation would have led to a different conclusion than the reported one. In terms of articles, 48% included at least one consistency error, and 17.6% would have to change at least one conclusion. In meta-analytical terms, with a focus on effect size, consistency errors can be considered substantial in 9.5% of the cases. These results imply a need to improve the quality and precision with which statistical results are reported in Spanish psychology journals.
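
    The consistency check described above, whether a reported p-value matches the reported statistic and its degrees of freedom, is easy to automate with standard survival functions and a rounding tolerance. The example entries below are invented for illustration, not taken from the reviewed articles.

      # Recompute p from a reported test statistic and degrees of freedom, then
      # flag entries whose reported p is inconsistent with it (beyond rounding).
      # The example entries are invented for illustration.
      from scipy import stats

      def recomputed_p(test, statistic, df):
          if test == "F":
              return stats.f.sf(statistic, *df)          # df = (df1, df2)
          if test == "t":
              return 2 * stats.t.sf(abs(statistic), df)  # two-sided
          if test == "chi2":
              return stats.chi2.sf(statistic, df)
          raise ValueError(test)

      reported = [
          ("F", 4.53, (2, 57), 0.015),
          ("t", 2.10, 28, 0.045),
          ("chi2", 11.07, 4, 0.26),    # deliberately inconsistent example
      ]

      for test, stat, df, p_reported in reported:
          p_calc = recomputed_p(test, stat, df)
          consistent = abs(p_calc - p_reported) < 0.005   # crude rounding tolerance
          print(f"{test:4s} stat={stat:6.2f} df={df}  reported p={p_reported:.3f} "
                f"recomputed p={p_calc:.3f}  {'OK' if consistent else 'INCONSISTENT'}")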

  16. The threshold vs LNT showdown: Dose rate findings exposed flaws in the LNT model part 2. How a mistake led BEIR I to adopt LNT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calabrese, Edward J., E-mail: edwardc@schoolph.uma

    This paper reveals that nearly 25 years after the BEIR I committee used Russell's dose-rate data to support the adoption of the linear-no-threshold (LNT) dose response model for genetic and cancer risk assessment, Russell acknowledged a significant under-reporting of the mutation rate of the historical control group. This error, which was unknown to BEIR I, had profound implications, leading it to incorrectly adopt the LNT model, a decision that profoundly changed the course of risk assessment for radiation and chemicals to the present. -- Highlights: • The BEAR I Genetics Panel made an error in denying dose rate for mutation. • The BEIR I Genetics Subcommittee attempted to correct this dose rate error. • The control group used for risk assessment by BEIR I is now known to be in error. • Correcting this error contradicts the LNT, supporting a threshold model.

  17. Long-range high-speed visible light communication system over 100-m outdoor transmission utilizing receiver diversity technology

    NASA Astrophysics Data System (ADS)

    Wang, Yiguang; Huang, Xingxing; Shi, Jianyang; Wang, Yuan-quan; Chi, Nan

    2016-05-01

    Visible light communication (VLC) has no doubt become a promising candidate for future wireless communications due to the increasing trends in the usage of light-emitting diodes (LEDs). In addition to indoor high-speed wireless access and positioning applications, VLC usage in outdoor scenarios, such as vehicle networks and intelligent transportation systems, is also attracting significant interest. However, the complex outdoor environment and ambient noise are the key challenges for long-range high-speed VLC outdoor applications. To improve system performance and transmission distance, we propose to use receiver diversity technology in an outdoor VLC system. Maximal ratio combining-based receiver diversity technology is utilized in two receivers to achieve the maximal signal-to-noise ratio. A 400-Mb/s VLC transmission using a phosphor-based white LED and a 1-Gb/s wavelength division multiplexing VLC transmission using a red-green-blue LED are both successfully achieved over a 100-m outdoor distance with the bit error rate below the 7% forward error correction limit of 3.8×10⁻³. To the best of our knowledge, this is the highest data rate ever achieved over a 100-m outdoor VLC transmission. The experimental results clearly prove the benefit and feasibility of receiver diversity technology for long-range high-speed outdoor VLC systems.
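
    Maximal ratio combining itself is compact to express: each branch is weighted by its channel gain divided by its noise power, so the combined signal-to-noise ratio equals the sum of the branch SNRs. The sketch below is a generic baseband illustration with made-up gains and noise levels, not the experimental receiver described above.

      # Generic maximal ratio combining of two noisy copies of the same signal:
      # weight each branch by (channel gain / noise power), so the combined SNR
      # equals the sum of the branch SNRs. All values are made up for illustration.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 10_000
      symbols = rng.choice([-1.0, 1.0], size=n)          # simple bipolar symbol stream

      gains = np.array([0.9, 0.4])                       # per-branch channel gains (assumed)
      noise_sd = np.array([0.3, 0.5])                    # per-branch noise std dev (assumed)

      received = gains[:, None] * symbols + rng.normal(0, noise_sd[:, None], (2, n))

      weights = gains / noise_sd**2                      # MRC weights (real-valued channel)
      combined = weights @ received

      branch_snr = (gains / noise_sd) ** 2
      print("branch SNRs       :", branch_snr.round(2))
      print("sum of branch SNRs:", branch_snr.sum().round(2))

      def ber(decision_signal):
          return float(np.mean(np.sign(decision_signal) != symbols))

      print("BER branch 1 :", ber(received[0]))
      print("BER branch 2 :", ber(received[1]))
      print("BER combined :", ber(combined))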

  18. A spectrally tunable solid-state source for radiometric, photometric, and colorimetric applications

    NASA Astrophysics Data System (ADS)

    Fryc, Irena; Brown, Steven W.; Eppeldauer, George P.; Ohno, Yoshihiro

    2004-10-01

    A spectrally tunable light source using a large number of LEDs and an integrating sphere has been designed and is being developed at NIST. The source is designed to be capable of producing spectral distributions that mimic various light sources in the visible region by feedback control of the individual LEDs. The output spectral irradiance or radiance of the source will be calibrated by a reference instrument, and the source will be used as a spectroradiometric as well as a photometric and colorimetric standard. Using the tunable source to mimic the spectra of display colors, for example, rather than a traditional incandescent standard lamp, can significantly reduce the spectral mismatch errors of colorimeters measuring displays. A series of simulations has been conducted to predict the performance of the designed tunable source when used for the calibration of colorimeters. The results indicate that the errors can be reduced by an order of magnitude compared with those obtained when the colorimeters are calibrated against Illuminant A. Stray light errors of a spectroradiometer can also be effectively reduced by using the tunable source to produce a blackbody spectrum at a higher temperature (e.g., 9000 K). The source can also approximate various CIE daylight illuminants and common lamp spectral distributions for other photometric and colorimetric applications.
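
    The core idea of combining many LED channels to approximate a target spectrum can be prototyped as a non-negative least-squares fit of LED basis spectra to the target. The Gaussian basis spectra and the Planckian (Illuminant-A-like) target below are assumptions chosen for illustration, not NIST's actual source or control loop.

      # Prototype of spectral matching: find non-negative LED drive weights whose
      # summed spectra best approximate a target spectral distribution.
      # Gaussian LED spectra and the target used here are illustrative assumptions.
      import numpy as np
      from scipy.optimize import nnls

      wl = np.linspace(380, 780, 401)                       # nm

      def led_spectrum(peak, fwhm=25.0):
          sigma = fwhm / 2.3548
          return np.exp(-0.5 * ((wl - peak) / sigma) ** 2)

      peaks = np.arange(400, 701, 20)                       # a bank of LED channels
      A = np.column_stack([led_spectrum(p) for p in peaks])

      # Target: an Illuminant-A-like relative distribution via Planck's law at 2856 K
      h, c, kB, T = 6.626e-34, 2.998e8, 1.381e-23, 2856.0
      lam = wl * 1e-9
      target = (1.0 / lam**5) / (np.exp(h * c / (lam * kB * T)) - 1.0)
      target /= target.max()

      weights, residual = nnls(A, target)
      fit = A @ weights
      print(f"rms spectral mismatch: {np.sqrt(np.mean((fit - target) ** 2)):.3f}")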

  19. Human-computer interaction in multitask situations

    NASA Technical Reports Server (NTRS)

    Rouse, W. B.

    1977-01-01

    Human-computer interaction in multitask decisionmaking situations is considered, and it is proposed that humans and computers have overlapping responsibilities. Queueing theory is employed to model this dynamic approach to the allocation of responsibility between human and computer. Results of simulation experiments are used to illustrate the effects of several system variables including number of tasks, mean time between arrivals of action-evoking events, human-computer speed mismatch, probability of computer error, probability of human error, and the level of feedback between human and computer. Current experimental efforts are discussed and the practical issues involved in designing human-computer systems for multitask situations are considered.

  20. Response cost, reinforcement, and children's Porteus Maze qualitative performance.

    PubMed

    Neenan, D M; Routh, D K

    1986-09-01

    Sixty fourth-grade children were given two different series of the Porteus Maze Test. The first series was given as a baseline, and the second series was administered under one of four different experimental conditions: control, response cost, positive reinforcement, or negative verbal feedback. Response cost and positive reinforcement, but not negative verbal feedback, led to significant decreases in the number of all types of qualitative errors in relation to the control group. The reduction of nontargeted as well as targeted errors provides evidence for the generalized effects of response cost and positive reinforcement.

  1. LED Illuminators for the SNAP Calibration

    NASA Astrophysics Data System (ADS)

    Misra, Amit; Baptista, B.; Mufson, S.; Mostek, N.

    2007-12-01

    The Supernova Acceleration Probe, or SNAP, is a proposed satellite mission that will study dark energy to better understand what is driving the universe's accelerated expansion. One of the goals of SNAP is to control systematic color uncertainties to less than 2%. The work described here is directed at the development of a flight calibration illumination system for SNAP that minimizes systematic errors in color. The system is based on LEDs as the illumination lamps. LEDs are compact, long-lived, and low-power illuminators, which make them attractive for space missions lasting several years. This poster discusses optical measurements of pulsed, thermally controlled LEDs obtained from commercial vendors. Measurements over short (over the span of one day) and long (over the span of weeks) time scales have shown that the irradiance of the LEDs we tested is constant at the 0.3% level. In these measurements we paid particular attention to the influence of junction heating. Measurements of LED irradiance versus the duty cycle of the pulsed LED show that in general the LED irradiance increases as the junction temperature increases. Additionally, the FWHM of the spectrum also increases as the temperature increases. However, measurements of LED irradiance versus temperature as regulated by a thermal controller circuit show that the LED irradiance decreases as the temperature increases. This work has been supported by the National Science Foundation under grant AST-0452975 (REU-Site to Indiana U.).

  2. Hierarchical scheme for detecting the rotating MIMO transmission of the in-door RGB-LED visible light wireless communications using mobile-phone camera

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Hao; Chow, Chi-Wai

    2015-01-01

    The multiple-input multiple-output (MIMO) scheme can extend the transmission capacity of light-emitting-diode (LED) based visible light communication (VLC) systems. A MIMO VLC system that uses the mobile-phone camera as the optical receiver (Rx) to receive the MIMO signal from an n×n Red-Green-Blue (RGB) LED array is desirable. The key step in decoding this signal is to detect the signal direction. If the LED transmitter (Tx) is rotated, the Rx may not recognize the rotation and transmission errors can occur. In this work, we propose and demonstrate a novel hierarchical transmission scheme which can reduce the computational complexity of rotation detection in an LED array VLC system. We use an n×n RGB LED array as the MIMO Tx. In our study, a novel two-dimensional Hadamard coding scheme is proposed. Using the different LED color layers to indicate the rotation, a low-complexity rotation detection method can be used to improve the quality of the received signal. The detection correction rate is above 95% within the indoor usage distance. Experimental results confirm the feasibility of the proposed scheme.
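
    The abstract does not spell out the exact two-dimensional Hadamard construction or the decoding steps, so the sketch below only shows the generic ingredient: generating an orthogonal Hadamard pattern for an n×n LED array and checking by correlation that a rotated copy is distinguishable from the upright pattern. The array size and the correlation-based check are assumptions for illustration.

      # Generic ingredient of such a scheme: an orthogonal Hadamard pattern for an
      # n x n LED array, and a correlation check showing that a rotated pattern
      # is distinguishable from the upright one. Details are assumptions.
      import numpy as np
      from scipy.linalg import hadamard

      n = 8                                        # assumed LED array size (power of 2)
      H = hadamard(n)                              # entries are +1 / -1
      pattern = (H > 0).astype(int)                # on/off LED pattern

      def correlation(a, b):
          a = a.ravel() - a.mean()
          b = b.ravel() - b.mean()
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

      for k in range(4):
          rotated = np.rot90(pattern, k)
          print(f"rotation {90 * k:3d} deg: correlation with upright pattern "
                f"= {correlation(pattern, rotated):+.2f}")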

  3. Combining task analysis and fault tree analysis for accident and incident analysis: a case study from Bulgaria.

    PubMed

    Doytchev, Doytchin E; Szwillus, Gerd

    2009-11-01

    Understanding the reasons for incident and accident occurrence is important for an organization's safety. Different methods have been developed to achieve this goal. To better understand the human behaviour in incident occurrence we propose an analysis concept that combines Fault Tree Analysis (FTA) and Task Analysis (TA). The former method identifies the root causes of an accident/incident, while the latter analyses the way people perform the tasks in their work environment and how they interact with machines or colleagues. These methods were complemented with the use of the Human Error Identification in System Tools (HEIST) methodology and the concept of Performance Shaping Factors (PSF) to deepen the insight into the error modes of an operator's behaviour. HEIST shows the external error modes that caused the human error and the factors that prompted the human to err. To show the validity of the approach, a case study at a Bulgarian Hydro power plant was carried out. An incident - the flooding of the plant's basement - was analysed by combining the afore-mentioned methods. The case study shows that Task Analysis in combination with other methods can be applied successfully to human error analysis, revealing details about erroneous actions in a realistic situation.

  4. Object permanence in adult common marmosets (Callithrix jacchus): not everything is an "A-not-B" error that seems to be one.

    PubMed

    Kis, Anna; Gácsi, Márta; Range, Friederike; Virányi, Zsófia

    2012-01-01

    In this paper, we describe a behaviour pattern similar to the "A-not-B" error found in human infants and young apes in a monkey species, the common marmosets (Callithrix jacchus). In contrast to the classical explanation, recently it has been suggested that the "A-not-B" error committed by human infants is at least partially due to misinterpretation of the hider's ostensively communicated object hiding actions as potential 'teaching' demonstrations during the A trials. We tested whether this so-called Natural Pedagogy hypothesis would account for the A-not-B error that marmosets commit in a standard object permanence task, but found no support for the hypothesis in this species. Alternatively, we present evidence that lower level mechanisms, such as attention and motivation, play an important role in committing the "A-not-B" error in marmosets. We argue that these simple mechanisms might contribute to the effect of undeveloped object representational skills in other species including young non-human primates that commit the A-not-B error.

  5. Analysis of light emitting diode array lighting system based on human vision: normal and abnormal uniformity condition.

    PubMed

    Qin, Zong; Ji, Chuangang; Wang, Kai; Liu, Sheng

    2012-10-08

    In this paper, the conditions for uniform lighting generated by a light emitting diode (LED) array were systematically studied. To take human vision into consideration, the contrast sensitivity function (CSF) was adopted as the critical criterion for uniform lighting instead of the conventionally used Sparrow's Criterion (SC). Through the CSF method, design parameters including system thickness, LED pitch, the LEDs' spatial radiation distribution, and the viewing condition can be combined analytically. In a specific LED array lighting system (LALS) with a foursquare LED arrangement, different types of LEDs (Lambertian and Batwing type), and a given viewing condition, optimum system thicknesses and LED pitches were calculated and compared with those obtained through the SC method. Results show that the CSF method can achieve more appropriate optimum parameters than the SC method. Additionally, an abnormal phenomenon in which uniformity varies non-monotonically with structural parameters in an LALS with non-Lambertian LEDs was found and analyzed. Based on the analysis, a design method for LALS that can bring about better practicability, lower cost and a more attractive appearance is summarized.
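
    The geometric part of this kind of analysis, the illuminance produced on a target plane by an n×n array of generalized-Lambertian LEDs, is straightforward to reproduce. The sketch below computes the illuminance distribution and a min/max uniformity figure for an assumed pitch, height, and Lambertian order m; the CSF-based visibility judgement itself is not reproduced here.

      # Illuminance on a plane from an n x n array of generalized-Lambertian LEDs:
      # E = I0 * z^(m+1) / d^(m+3), summed over LEDs. Pitch, height and m are
      # assumed values; the CSF visibility criterion is not reproduced here.
      import numpy as np

      n, pitch, z, m, I0 = 4, 20.0, 30.0, 1.0, 1.0   # count, mm, mm, Lambertian order, a.u.

      offset = (n - 1) * pitch / 2.0
      led_xy = [(i * pitch - offset, j * pitch - offset) for i in range(n) for j in range(n)]

      xs = np.linspace(-offset, offset, 201)
      X, Y = np.meshgrid(xs, xs)

      E = np.zeros_like(X)
      for (x0, y0) in led_xy:
          d2 = (X - x0) ** 2 + (Y - y0) ** 2 + z ** 2
          E += I0 * z ** (m + 1) / d2 ** ((m + 3) / 2.0)

      uniformity = E.min() / E.max()
      print(f"min/max illuminance uniformity over the array footprint: {uniformity:.3f}")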

  6. Enhanced ICBM Diffusion Tensor Template of the Human Brain

    PubMed Central

    Zhang, Shengwei; Peng, Huiling; Dawe, Robert J.; Arfanakis, Konstantinos

    2010-01-01

    Development of a diffusion tensor (DT) template that is representative of the micro-architecture of the human brain is crucial for comparisons of neuronal structural integrity and brain connectivity across populations, as well as for the generation of a detailed white matter atlas. Furthermore, a DT template in ICBM space may simplify consolidation of information from DT, anatomical and functional MRI studies. The previously developed “IIT DT brain template” was produced in ICBM-152 space, based on a large number of subjects from a limited age-range, using data with minimal image artifacts, and non-linear registration. That template was characterized by higher image sharpness, provided the ability to distinguish smaller white matter fiber structures, and contained fewer image artifacts, than several previously published DT templates. However, low-dimensional registration was used in the development of that template, which led to a mismatch of DT information across subjects, eventually manifested as loss of local diffusion information and errors in the final tensors. Also, low-dimensional registration led to a mismatch of the anatomy in the IIT and ICBM-152 templates. In this work, a significantly improved DT brain template in ICBM-152 space was developed, using high-dimensional non-linear registration and the raw data collected for the purposes of the IIT template. The accuracy of inter-subject DT matching was significantly increased compared to that achieved for the development of the IIT template. Consequently, the new template contained DT information that was more representative of single-subject human brain data, and was characterized by higher image sharpness than the IIT template. Furthermore, a bootstrap approach demonstrated that the variance of tensor characteristics was lower in the new template. Additionally, compared to the IIT template, brain anatomy in the new template more accurately matched ICBM-152 space. Finally, spatial normalization of a number of DT datasets through registration to the new and existing IIT templates was improved when using the new template. PMID:20851772

  7. An optrode for photometric detection of ammonia in air

    NASA Astrophysics Data System (ADS)

    Buzanovskii, V. A.

    2017-10-01

    A scheme of constructing an LED optrode for photometric detection of ammonia in air is considered. The components of the device are (1) a glass plate coated with a film of polydimethylsiloxane with an ion-coupled cation of brilliant-green dye, (2) an LED emitting at a wavelength of 655 nm, and (3) a metal housing. The nominal static conversion function, sensitivity, and relative measurement error of the device are analyzed on the basis of mathematical modeling. The obtained results allow one to design an LED optrode capable of carrying out control for automated technological processes, solving problems in the area of security, etc. The device provides the ability to create photometric gas analyzers of ammonia with small overall dimensions, power consumption, and cost.

  8. Updating expected action outcome in the medial frontal cortex involves an evaluation of error type.

    PubMed

    Maier, Martin E; Steinhauser, Marco

    2013-10-02

    Forming expectations about the outcome of an action is an important prerequisite for action control and reinforcement learning in the human brain. The medial frontal cortex (MFC) has been shown to play an important role in the representation of outcome expectations, particularly when an update of expected outcome becomes necessary because an error is detected. However, error detection alone is not always sufficient to compute expected outcome because errors can occur in various ways and different types of errors may be associated with different outcomes. In the present study, we therefore investigate whether updating expected outcome in the human MFC is based on an evaluation of error type. Our approach was to consider an electrophysiological correlate of MFC activity on errors, the error-related negativity (Ne/ERN), in a task in which two types of errors could occur. Because the two error types were associated with different amounts of monetary loss, updating expected outcomes on error trials required an evaluation of error type. Our data revealed a pattern of Ne/ERN amplitudes that closely mirrored the amount of monetary loss associated with each error type, suggesting that outcome expectations are updated based on an evaluation of error type. We propose that this is achieved by a proactive evaluation process that anticipates error types by continuously monitoring error sources or by dynamically representing possible response-outcome relations.

  9. Human-Friendly Light-Emitting Diode Source Stimulates Broiler Growth.

    PubMed

    Pan, Jinming; Yang, Yefeng; Yang, Bo; Dai, Wenhua; Yu, Yonghua

    2015-01-01

    Previous studies from our laboratory have reported that short-wavelength (blue and green) light, alone and in combination, stimulates broiler growth. However, short-wavelength stimuli could have negative effects on poultry husbandry workers. The present study was conducted to evaluate the effects of human-friendly yellow LED light, which is acceptable to humans and close to green light, on broiler growth. We also aimed to investigate the potential quantitative relationship between the wavelengths of light used for artificial illumination and growth parameters in broilers. After hatching, 360 female chicks ("Meihuang") were evenly divided into six lighting treatment groups: white LED strips (400-700 nm, WL); red LED strips (620 nm, RL); yellow LED strips (580 nm, YL); green LED strips (514 nm, GL); blue LED strips (455 nm, BL); and fluorescent strips (400-700 nm, FL). From 30 to 72 days of age, broilers reared under YL and GL were heavier than broilers treated with FL (P < 0.05). Broilers reared under YL showed growth parameters similar to those of broilers reared under GL and BL (P > 0.05). Moreover, YL significantly improved feeding efficiency compared with GL and BL at 45 and 60 days of age (P < 0.05). In addition, we found an age-dependent effect of light spectra on broiler growth and a quantitative relationship between LED light spectra (455 to 620 nm) and the live body weights of broilers. The wavelength of light (455 to 620 nm) was negatively related (R2 = 0.876) to live body weight at an early stage of development, whereas it was positively correlated with live body weight (R2 = 0.925) in older chickens. Our results demonstrate that human-friendly yellow LED light (YL) can be applied in broiler production.

  10. Human-Friendly Light-Emitting Diode Source Stimulates Broiler Growth

    PubMed Central

    Yang, Bo; Dai, Wenhua; Yu, Yonghua

    2015-01-01

    Previous studies from our laboratory have reported that short-wavelength (blue and green) light, alone and in combination, stimulates broiler growth. However, short-wavelength stimuli could have negative effects on poultry husbandry workers. The present study was conducted to evaluate the effects of human-friendly yellow LED light, which is acceptable to humans and close to green light, on broiler growth. We also aimed to investigate the potential quantitative relationship between the wavelengths of light used for artificial illumination and growth parameters in broilers. After hatching, 360 female chicks (“Meihuang”) were evenly divided into six lighting treatment groups: white LED strips (400–700 nm, WL); red LED strips (620 nm, RL); yellow LED strips (580 nm, YL); green LED strips (514 nm, GL); blue LED strips (455 nm, BL); and fluorescent strips (400–700 nm, FL). From 30 to 72 days of age, broilers reared under YL and GL were heavier than broilers treated with FL (P < 0.05). Broilers reared under YL showed growth parameters similar to those of broilers reared under GL and BL (P > 0.05). Moreover, YL significantly improved feeding efficiency compared with GL and BL at 45 and 60 days of age (P < 0.05). In addition, we found an age-dependent effect of light spectra on broiler growth and a quantitative relationship between LED light spectra (455 to 620 nm) and the live body weights of broilers. The wavelength of light (455 to 620 nm) was negatively related (R2 = 0.876) to live body weight at an early stage of development, whereas it was positively correlated with live body weight (R2 = 0.925) in older chickens. Our results demonstrate that human-friendly yellow LED light (YL) can be applied in broiler production. PMID:26270988

  11. Accounting for measurement error in human life history trade-offs using structural equation modeling.

    PubMed

    Helle, Samuli

    2018-03-01

    Revealing causal effects from correlative data is very challenging and a contemporary problem in human life history research owing to the lack of experimental approach. Problems with causal inference arising from measurement error in independent variables, whether related either to inaccurate measurement technique or validity of measurements, seem not well-known in this field. The aim of this study is to show how structural equation modeling (SEM) with latent variables can be applied to account for measurement error in independent variables when the researcher has recorded several indicators of a hypothesized latent construct. As a simple example of this approach, measurement error in lifetime allocation of resources to reproduction in Finnish preindustrial women is modelled in the context of the survival cost of reproduction. In humans, lifetime energetic resources allocated in reproduction are almost impossible to quantify with precision and, thus, typically used measures of lifetime reproductive effort (e.g., lifetime reproductive success and parity) are likely to be plagued by measurement error. These results are contrasted with those obtained from a traditional regression approach where the single best proxy of lifetime reproductive effort available in the data is used for inference. As expected, the inability to account for measurement error in women's lifetime reproductive effort resulted in the underestimation of its underlying effect size on post-reproductive survival. This article emphasizes the advantages that the SEM framework can provide in handling measurement error via multiple-indicator latent variables in human life history studies. © 2017 Wiley Periodicals, Inc.
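
    The attenuation effect described above (measurement error in an independent variable shrinking its estimated effect toward zero) can be illustrated with a small simulation. This is a toy sketch with invented parameters, not the article's structural equation model or its Finnish data.

```python
import numpy as np

# Minimal simulation of attenuation bias, illustrating why ignoring measurement
# error in an independent variable underestimates its effect. Parameters are invented.
rng = np.random.default_rng(0)
n = 10_000
true_effort = rng.normal(size=n)                               # latent "reproductive effort"
outcome = -0.5 * true_effort + rng.normal(scale=1.0, size=n)   # true slope is -0.5
noisy_proxy = true_effort + rng.normal(scale=1.0, size=n)      # error-laden indicator

def ols_slope(x, y):
    """Simple least-squares slope of y on x."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

print("slope using latent variable:", round(ols_slope(true_effort, outcome), 3))
print("slope using noisy proxy    :", round(ols_slope(noisy_proxy, outcome), 3))
# The proxy-based slope shrinks toward zero by the reliability ratio
# var(true) / (var(true) + var(error)) = 0.5 in this example.
```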

  12. Reliability of drivers in urban intersections.

    PubMed

    Gstalter, Herbert; Fastenmeier, Wolfgang

    2010-01-01

    The concept of human reliability has been widely used in industrial settings by human factors experts to optimise the person-task fit. Reliability is estimated by the probability that a task will successfully be completed by personnel in a given stage of system operation. Human Reliability Analysis (HRA) is a technique used to calculate human error probabilities as the ratio of errors committed to the number of opportunities for that error. To transfer this notion to the measurement of car driver reliability the following components are necessary: a taxonomy of driving tasks, a definition of correct behaviour in each of these tasks, a list of errors as deviations from the correct actions and an adequate observation method to register errors and opportunities for these errors. Use of the SAFE-task analysis procedure recently made it possible to derive driver errors directly from the normative analysis of behavioural requirements. Driver reliability estimates could be used to compare groups of tasks (e.g. different types of intersections with their respective regulations) as well as groups of drivers' or individual drivers' aptitudes. This approach was tested in a field study with 62 drivers of different age groups. The subjects drove an instrumented car and had to complete an urban test route, the main features of which were 18 intersections representing six different driving tasks. The subjects were accompanied by two trained observers who recorded driver errors using standardized observation sheets. Results indicate that error indices often vary between both the age group of drivers and the type of driving task. The highest error indices occurred in the non-signalised intersection tasks and the roundabout, which exactly equals the corresponding ratings of task complexity from the SAFE analysis. A comparison of age groups clearly shows the disadvantage of older drivers, whose error indices in nearly all tasks are significantly higher than those of the other groups. The vast majority of these errors could be explained by high task load in the intersections, as they represent difficult tasks. The discussion shows how reliability estimates can be used in a constructive way to propose changes in car design, intersection layout and regulation as well as driver training.
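
    As a minimal illustration of the error-index calculation described above (human error probability as errors committed divided by opportunities for error, aggregated per driving task), the sketch below uses hypothetical observation records rather than the study's data.

```python
from collections import defaultdict

# Minimal sketch: human error probability (HEP) per driving task
# = errors observed / opportunities observed. Records below are hypothetical.
observations = [
    {"task": "non-signalised intersection", "errors": 3, "opportunities": 18},
    {"task": "non-signalised intersection", "errors": 1, "opportunities": 18},
    {"task": "roundabout", "errors": 2, "opportunities": 12},
    {"task": "signalised intersection", "errors": 0, "opportunities": 24},
]

errors = defaultdict(int)
opportunities = defaultdict(int)
for rec in observations:
    errors[rec["task"]] += rec["errors"]
    opportunities[rec["task"]] += rec["opportunities"]

for task in errors:
    hep = errors[task] / opportunities[task]
    print(f"{task}: HEP = {hep:.3f}, reliability = {1 - hep:.3f}")
```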

  13. Safety coaches in radiology: decreasing human error and minimizing patient harm.

    PubMed

    Dickerson, Julie M; Koch, Bernadette L; Adams, Janet M; Goodfriend, Martha A; Donnelly, Lane F

    2010-09-01

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program.

  14. Error-associated behaviors and error rates for robotic geology

    NASA Technical Reports Server (NTRS)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort, and knowledge-based decisions require the greatest. Errors can occur at any of the cognitive levels.

  15. Procedural error monitoring and smart checklists

    NASA Technical Reports Server (NTRS)

    Palmer, Everett

    1990-01-01

    Human beings make and usually detect errors routinely. The same mental processes that allow humans to cope with novel problems can also lead to error. Bill Rouse has argued that errors are not inherently bad but their consequences may be. He proposes the development of error-tolerant systems that detect errors and take steps to prevent the consequences of the error from occurring. Research should be done on self and automatic detection of random and unanticipated errors. For self detection, displays should be developed that make the consequences of errors immediately apparent. For example, electronic map displays graphically show the consequences of horizontal flight plan entry errors. Vertical profile displays should be developed to make apparent vertical flight planning errors. Other concepts such as energy circles could also help the crew detect gross flight planning errors. For automatic detection, systems should be developed that can track pilot activity, infer pilot intent and inform the crew of potential errors before their consequences are realized. Systems that perform a reasonableness check on flight plan modifications by checking route length and magnitude of course changes are simple examples. Another example would be a system that checked the aircraft's planned altitude against a data base of world terrain elevations. Information is given in viewgraph form.
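
    A hedged sketch of the first automatic-detection example mentioned above: a reasonableness check on flight-plan modifications that flags implausible increases in route length or course changes. The thresholds, waypoints, and crude distance approximation are illustrative only.

```python
import math

# Illustrative reasonableness check on flight-plan edits: flag a modification
# whose added length or turn angle is implausibly large. Not an avionics algorithm.
MAX_EXTRA_DISTANCE_NM = 200.0
MAX_COURSE_CHANGE_DEG = 90.0

def leg_length(a, b):
    """Planar approximation of leg length in nautical miles (illustrative only)."""
    return math.hypot(b[0] - a[0], b[1] - a[1]) * 60.0  # degrees -> nm, crudely

def route_length(waypoints):
    return sum(leg_length(a, b) for a, b in zip(waypoints, waypoints[1:]))

def course(a, b):
    return math.degrees(math.atan2(b[0] - a[0], b[1] - a[1])) % 360.0

def flags(old_route, new_route):
    warnings = []
    if route_length(new_route) - route_length(old_route) > MAX_EXTRA_DISTANCE_NM:
        warnings.append("route length increased beyond reasonableness threshold")
    for a, b, c in zip(new_route, new_route[1:], new_route[2:]):
        turn = abs((course(b, c) - course(a, b) + 180) % 360 - 180)
        if turn > MAX_COURSE_CHANGE_DEG:
            warnings.append(f"course change of {turn:.0f} deg at waypoint {b}")
    return warnings

old = [(40.0, -75.0), (41.0, -73.0), (42.0, -71.0)]
new = [(40.0, -75.0), (45.0, -80.0), (42.0, -71.0)]  # suspicious detour
print(flags(old, new))
```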

  16. A circadian rhythm in skill-based errors in aviation maintenance.

    PubMed

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or requires the conscious application of domain knowledge in planning actions. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are classified as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work at each hour of the day, and the 24 h pattern of each error type was examined. Skill-based errors exhibited a significant circadian rhythm, being most prevalent in the early hours of the morning. Variation in the frequency of rule-based errors, knowledge-based errors, and procedure violations over the 24 h did not reach statistical significance. The results suggest that during the early hours of the morning, maintenance technicians are at heightened risk of "absent-minded" errors involving failures to execute action plans as intended.
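
    The exposure adjustment described above (dividing the raw hourly error count by the estimated proportion of the workforce present at that hour) can be sketched as follows; the counts and staffing proportions are invented for illustration.

```python
# Exposure-adjusted hourly error rates: raw counts divided by the estimated
# proportion of workers present at each hour, so thinly staffed night hours
# are not under-weighted. All numbers below are invented for illustration.
errors_by_hour = {0: 9, 1: 11, 2: 14, 3: 16, 4: 12, 10: 18, 14: 20, 22: 8}
proportion_at_work = {0: 0.15, 1: 0.15, 2: 0.15, 3: 0.15, 4: 0.15,
                      10: 0.90, 14: 0.95, 22: 0.25}

adjusted_rate = {
    hour: count / proportion_at_work[hour]
    for hour, count in errors_by_hour.items()
}
for hour in sorted(adjusted_rate):
    print(f"{hour:02d}:00  raw={errors_by_hour[hour]:3d}  adjusted={adjusted_rate[hour]:6.1f}")
```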

  17. Do errors matter? Errorless and errorful learning in anomic picture naming.

    PubMed

    McKissock, Stephen; Ward, Jamie

    2007-06-01

    Errorless training methods significantly improve learning in memory-impaired patients relative to errorful training procedures. However, the validity of this technique for acquiring linguistic information in aphasia has rarely been studied. This study contrasts three different treatment conditions over an 8 week period for rehabilitating picture naming in anomia: (1) errorless learning in which pictures are shown and the experimenter provides the name, (2) errorful learning with feedback in which the patient is required to generate a name but the correct name is then supplied by the experimenter, and (3) errorful learning in which no feedback is given. These conditions are compared to an untreated set of matched words. Both errorless and errorful learning with feedback conditions led to significant improvement at a 2-week and 12-14-week retest (errorful without feedback and untreated words were similar). The results suggest that it does not matter whether anomic patients are allowed to make errors in picture naming or not (unlike in memory impaired individuals). What does matter is that a correct response is given as feedback. The results also question the widely held assumption that it is beneficial for a patient to attempt to retrieve a word, given that our errorless condition involved no retrieval effort and had the greatest benefits.

  18. Can a two-hour lecture by a pharmacist improve the quality of prescriptions in a pediatric hospital? A retrospective cohort study.

    PubMed

    Vairy, Stephanie; Corny, Jennifer; Jamoulle, Olivier; Levy, Arielle; Lebel, Denis; Carceller, Ana

    2017-12-01

    A high rate of prescription errors exists in pediatric teaching hospitals, especially during initial training. To determine the effectiveness of a two-hour lecture by a pharmacist on rates of prescription errors and quality of prescriptions. A two-hour lecture led by a pharmacist was provided to 11 junior pediatric residents (PGY-1) as part of a one-month immersion program. A control group included 15 residents without the intervention. We reviewed charts to analyze the first 50 prescriptions of each resident. Data were collected from 1300 prescriptions involving 451 patients, 550 in the intervention group and 750 in the control group. The rate of prescription errors in the intervention group was 9.6% compared to 11.3% in the control group (p=0.32), affecting 106 patients. Statistically significant differences between both groups were prescriptions with unwritten doses (p=0.01) and errors involving overdosing (p=0.04). We identified many errors as well as issues surrounding quality of prescriptions. We found a 10.6% prescription error rate. This two-hour lecture seems insufficient to reduce prescription errors among junior pediatric residents. This study highlights the most frequent types of errors and prescription quality issues that should be targeted by future educational interventions.
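
    The reported comparison (9.6% of 550 versus 11.3% of 750 prescriptions, p = 0.32) can be re-checked approximately with a two-proportion test; the error counts below are reconstructed from the rounded rates in the abstract, so the result only approximates the reported p-value.

```python
from statsmodels.stats.proportion import proportions_ztest

# Rough re-check of the reported comparison. Counts are reconstructed from the
# rounded rates in the abstract, so the p-value only approximates the reported 0.32.
errors = [round(0.096 * 550), round(0.113 * 750)]  # ~53 vs ~85 erroneous prescriptions
totals = [550, 750]

stat, p_value = proportions_ztest(count=errors, nobs=totals)
print(f"z = {stat:.2f}, p = {p_value:.2f}")
```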

  19. Exploring the characteristics, global distribution and reasons for retraction of published articles involving human research participants: a literature survey.

    PubMed

    Li, Guowei; Kamel, Mariam; Jin, Yanling; Xu, Michael Kuan; Mbuagbaw, Lawrence; Samaan, Zainab; Levine, Mitchell Ah; Thabane, Lehana

    2018-01-01

    Article retraction is a measure taken by journals or authors where there is evidence of research misconduct or error, redundancy, plagiarism or unethical research. Recently, the retraction of scientific publications has been on the rise. In this survey, we aimed to describe the characteristics and distribution of retracted articles and the reasons for retractions. We searched retracted articles on the PubMed database and Retraction Watch website from 1980 to February 2016. The primary outcomes were the characteristics and distribution of retracted articles and the reasons for retractions. The secondary outcomes included how article retractions were handled by journals and how to improve journal practices toward article retractions. We included 1,339 retracted articles. Most retracted articles had six authors or fewer. Article retraction was most common in the USA (26%), Japan (11%) and Germany (10%). The main reasons for article retraction were misconduct (51%, n = 685) and error (14%, n = 193). Sixty-six percent (n = 889) of retracted articles had male senior or corresponding authors. Of the articles retracted after August 2010, 63% (n = 567) of retractions were reported on Retraction Watch. Large discrepancies were observed in the ways that different journals handled article retractions. For instance, articles were completely withdrawn from some journals, while in others, articles were still available with no indication of retraction. Likewise, some retraction notices included a detailed account of the events that led to article retraction, while others only consisted of a statement indicating the article retraction. The characteristics, geographic distribution and reasons for retraction of published articles involving human research participants were examined in this survey. More efforts are needed to improve the consistency and transparency of journal practices toward article retractions.

  20. Exploring the characteristics, global distribution and reasons for retraction of published articles involving human research participants: a literature survey

    PubMed Central

    Li, Guowei; Kamel, Mariam; Jin, Yanling; Xu, Michael Kuan; Mbuagbaw, Lawrence; Samaan, Zainab; Levine, Mitchell AH; Thabane, Lehana

    2018-01-01

    Aim: Article retraction is a measure taken by journals or authors where there is evidence of research misconduct or error, redundancy, plagiarism or unethical research. Recently, the retraction of scientific publications has been on the rise. In this survey, we aimed to describe the characteristics and distribution of retracted articles and the reasons for retractions. Methods: We searched retracted articles on the PubMed database and Retraction Watch website from 1980 to February 2016. The primary outcomes were the characteristics and distribution of retracted articles and the reasons for retractions. The secondary outcomes included how article retractions were handled by journals and how to improve journal practices toward article retractions. Results: We included 1,339 retracted articles. Most retracted articles had six authors or fewer. Article retraction was most common in the USA (26%), Japan (11%) and Germany (10%). The main reasons for article retraction were misconduct (51%, n = 685) and error (14%, n = 193). Sixty-six percent (n = 889) of retracted articles had male senior or corresponding authors. Of the articles retracted after August 2010, 63% (n = 567) of retractions were reported on Retraction Watch. Large discrepancies were observed in the ways that different journals handled article retractions. For instance, articles were completely withdrawn from some journals, while in others, articles were still available with no indication of retraction. Likewise, some retraction notices included a detailed account of the events that led to article retraction, while others only consisted of a statement indicating the article retraction. Conclusion: The characteristics, geographic distribution and reasons for retraction of published articles involving human research participants were examined in this survey. More efforts are needed to improve the consistency and transparency of journal practices toward article retractions. PMID:29403283

  1. Spatial durbin error model for human development index in Province of Central Java.

    NASA Astrophysics Data System (ADS)

    Septiawan, A. R.; Handajani, S. S.; Martini, T. S.

    2018-05-01

    The Human Development Index (HDI) is an indicator used to measure success in improving the quality of human life, describing how people access development outcomes in terms of income, health and education. HDI in Central Java has improved every year. In 2016, HDI in Central Java was 69.98%, an increase of 0.49% over the previous year. The objective of this study was to apply the spatial Durbin error model, using a queen-contiguity spatial weights matrix, to model HDI in Central Java Province. The spatial Durbin error model is used because it accounts for both the spatial dependence of the errors and the spatial dependence of the independent variables. The factors used are life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity. Based on the results, we obtain a spatial Durbin error model for HDI in Central Java in which the influencing factors are life expectancy, mean years of schooling, expected years of schooling, and purchasing power parity.
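
    For reference, the spatial Durbin error model is commonly written as below; the notation is the standard textbook form (W a spatial weights matrix, here built from queen contiguity), not necessarily the paper's.

```latex
% Standard form of the spatial Durbin error model (SDEM); textbook notation,
% not necessarily that of the paper above.
\[
\begin{aligned}
y &= X\beta + W X\theta + u, \\
u &= \lambda W u + \varepsilon, \qquad \varepsilon \sim N(0, \sigma^2 I_n),
\end{aligned}
\]
% W is the spatial weights matrix, WX\theta captures spatial spillovers of the
% covariates, and \lambda measures spatial dependence in the error term.
```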

  2. Resistance to extreme strategies, rather than prosocial preferences, can explain human cooperation in public goods games.

    PubMed

    Kümmerli, Rolf; Burton-Chellew, Maxwell N; Ross-Gillespie, Adin; West, Stuart A

    2010-06-01

    The results of numerous economic games suggest that humans behave more cooperatively than would be expected if they were maximizing selfish interests. It has been argued that this is because individuals gain satisfaction from the success of others, and that such prosocial preferences require a novel evolutionary explanation. However, in previous games, imperfect behavior would automatically lead to an increase in cooperation, making it impossible to decouple any form of mistake or error from prosocial cooperative decisions. Here we empirically test between these alternatives by decoupling imperfect behavior from prosocial preferences in modified versions of the public goods game, in which individuals would maximize their selfish gain by completely (100%) cooperating. We found that, although this led to higher levels of cooperation, it did not lead to full cooperation, and individuals still perceived their group mates as competitors. This is inconsistent with either selfish or prosocial preferences, suggesting that the most parsimonious explanation is imperfect behavior triggered by psychological drives that can prevent both complete defection and complete cooperation. More generally, our results illustrate the caution that must be exercised when interpreting the evolutionary implications of economic experiments, especially the absolute level of cooperation in a particular treatment.

  3. The "hospital central laboratory": automation, integration and clinical usefulness.

    PubMed

    Zaninotto, Martina; Plebani, Mario

    2010-07-01

    Recent technological developments in laboratory medicine have led to a major challenge: maintaining a close connection between the pursuit of efficiency through automation and consolidation and the assurance of effectiveness. The adoption of systems that automate most of the manual tasks characterizing routine activities has significantly improved the quality of laboratory performance, total laboratory automation being the paradigm of the idea that "human-less" robotic laboratories may allow better operation and ensure fewer human errors. Furthermore, even though ongoing technological developments have considerably improved the productivity of clinical laboratories and reduced the turnaround time of the entire process, the value of qualified personnel remains a significant issue. Recent evidence confirms that automation allows clinical laboratories to improve analytical performance only if trained staff operate in accordance with well-defined standard operating procedures, thus assuring continuous monitoring of analytical quality. In addition, laboratory automation may improve the appropriateness of test requests through the use of algorithms and reflex testing. This should allow the adoption of clinical and biochemical guidelines. In conclusion, in laboratory medicine, technology represents a tool for improving clinical effectiveness and patient outcomes, but it has to be managed by qualified laboratory professionals.

  4. Hierarchical learning induces two simultaneous, but separable, prediction errors in human basal ganglia.

    PubMed

    Diuk, Carlos; Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew; Niv, Yael

    2013-03-27

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously.
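
    The two prediction-error signals referred to above are typically formalized as temporal-difference errors, one per level of the task hierarchy; the notation below is the generic reinforcement-learning form, not the paper's model.

```latex
% Generic temporal-difference prediction errors, one per hierarchical level;
% standard RL notation, not taken from the paper above.
\[
\delta^{(i)}_t = r^{(i)}_t + \gamma\, V^{(i)}(s_{t+1}) - V^{(i)}(s_t), \qquad i \in \{\text{low}, \text{high}\},
\]
% each level i of the task hierarchy carries its own (pseudo-)reward r^{(i)} and
% value estimate V^{(i)}, so two prediction errors can coincide in time.
```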

  5. Types of Lamp for Homework and Myopia among Chinese School-Aged Children.

    PubMed

    Pan, Chen-Wei; Wu, Rong-Kun; Liu, Hu; Li, Jun; Zhong, Hua

    2018-06-01

    We aim to determine the association of the types of lamp used for homework, including incandescent, fluorescent, and light-emitting diode (LED) lamps, with the prevalence of myopia in Chinese children. A total of 2346 grade 7 students from ten middle schools (93.5% response rate) aged 13 to 14 years in Mojiang, a small county located in Southwestern China, participated in the study. Refractive error was measured with cycloplegia using an autorefractor by optometrists or trained technicians. An IOL Master was used to measure ocular biometric parameters including axial length (AL). Information regarding the types of lamp used for homework after school was collected by questionnaires. Of all the study participants, 693 (29.5%) were affected by myopia, with the prevalence estimates being higher in girls (36.8%; 95% confidence interval [CI]: 34.0, 39.6) than in boys (22.8%; 95% CI: 20.4, 25.1) (P < 0.001). After adjusting for potential confounders such as gender, height, parental history of myopia, time on computer use, time watching TV, time outdoors, and time on reading and writing, participants using LED lamps for homework had a more myopic refractive error and a longer AL compared with those using incandescent or fluorescent lamps. There were no significant differences in myopia prevalence between children using incandescent and fluorescent lamps for homework. The population attributable risk percentage for myopia associated with using LED lamps for homework after school was 11.2%. Using LED lamps for homework after school might contribute to the development of myopia among school-aged children.

  6. AOT Retrieval Procedure for Distributed Measurements With Low-Cost Sun Photometers

    NASA Astrophysics Data System (ADS)

    Toledo, F.; Garrido, C.; Díaz, M.; Rondanelli, R.; Jorquera, S.; Valdivieso, P.

    2018-01-01

    We propose a new application of inexpensive light-emitting diode (LED)-based Sun photometers, consisting of measuring the aerosol optical thickness (AOT) at high resolution within metropolitan scales. Previously, these instruments have been used at continental scales by the GLOBE program, but that spatial coverage is already provided by the more expensive, higher-precision instruments of the AERONET global network. To this end, we built an open-source, two-channel LED-based Sun photometer based on previous developments, with improvements in the hardware and software and modifications to the calibration procedure. Among these we highlight the use of MODTRAN to characterize the effect introduced by using LED sensors in the AOT retrieval, an open design available to the scientific community, and a calibration procedure that takes advantage of a CIMEL Sun photometer located within the city, which enables the intercomparison of several LED Sun photometers against a common reference. We estimated the root-mean-square error in the AOT retrieved by the prototypes as 0.006 at 564 nm and 0.009 at 408 nm. This error is well below the magnitude of the AOT daily-cycle variability measured in our campaigns, even for distances shorter than 15 km. In addition to inner-city campaigns, we also show aerosol-tracing applications by measuring AOT variations from the city of Santiago to the Andes glaciers. Measuring AOT at high spatial resolution in urban areas can improve our understanding of urban-scale aerosol circulation, providing information for solar energy planning, health policies, and climatological studies, among others.
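
    AOT retrieval with a Sun photometer typically follows a Beer-Lambert/Langley relation; the sketch below shows that generic form for a single LED channel, with a placeholder calibration constant, Rayleigh term, and airmass model rather than the instrument's actual calibration.

```python
import numpy as np

# Generic Beer-Lambert / Langley-style AOT retrieval for one channel:
# V = V0 * exp(-m * tau_total), so tau_aerosol = ln(V0/V)/m minus the molecular
# (Rayleigh) contribution. V0, the Rayleigh term, and the airmass model are
# placeholders, not the calibration of the instrument described above.
V0_564 = 1.523            # hypothetical extraterrestrial calibration constant
TAU_RAYLEIGH_564 = 0.097  # hypothetical molecular optical thickness at 564 nm

def airmass(solar_zenith_deg):
    """Plane-parallel approximation; adequate away from the horizon."""
    return 1.0 / np.cos(np.radians(solar_zenith_deg))

def aot(signal_volts, solar_zenith_deg, v0=V0_564, tau_ray=TAU_RAYLEIGH_564):
    m = airmass(solar_zenith_deg)
    tau_total = np.log(v0 / signal_volts) / m
    return tau_total - tau_ray

print(round(aot(signal_volts=1.05, solar_zenith_deg=40.0), 3))
```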

  7. Mental representation of symbols as revealed by vocabulary errors in two bonobos (Pan paniscus).

    PubMed

    Lyn, Heidi

    2007-10-01

    Error analysis has been used in humans to detect implicit representations and categories in language use. The present study utilizes the same technique to report on mental representations and categories in symbol use from two bonobos (Pan paniscus). These bonobos have been shown in published reports to comprehend English at the level of a two-and-a-half-year-old child and to use a keyboard with over 200 visuographic symbols (lexigrams). In this study, vocabulary test errors from over 10 years of data revealed auditory, visual, and spatio-temporal generalizations (errors were more likely to be items that looked like, sounded like, or were frequently associated with the sample item in space or in time), as well as hierarchical and conceptual categorizations. These error data, like those of humans, are a result of spontaneous responding rather than specific training and do not solely depend upon the sample mode (e.g. auditory similarity errors are not universally more frequent with an English sample, nor are visual similarity errors universally more frequent with a photograph sample). However, unlike humans, these bonobos do not make errors based on syntactical confusions (e.g. confusing semantically unrelated nouns), suggesting that they may not separate syntactical and semantic information. These data suggest that apes spontaneously create a complex, hierarchical web of representations when exposed to a symbol system.

  8. 78 FR 11237 - Public Hearing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-15

    ... management of human error in its operations and system safety programs, and the status of PTC implementation... UP's safety management policies and programs associated with human error, operational accident and... Chairman of the Board of Inquiry 2. Introduction of the Board of Inquiry and Technical Panel 3...

  9. Poster - 49: Assessment of Synchrony respiratory compensation error for CyberKnife liver treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ming; Cygler,

    The goal of this work is to quantify respiratory motion compensation errors for liver tumor patients treated by the CyberKnife system with Synchrony tracking, to identify patients with the smallest tracking errors and to eventually help coach patients' breathing patterns to minimize dose delivery errors. The accuracy of CyberKnife Synchrony respiratory motion compensation was assessed for 37 patients treated for liver lesions by analyzing data from system logfiles. A predictive model is used to modulate the direction of individual beams during dose delivery based on the positions of internally implanted fiducials determined using an orthogonal x-ray imaging system and the current location of LED external markers. For each x-ray pair acquired, system logfiles report the prediction error, the difference between the measured and predicted fiducial positions, and the delivery error, which is an estimate of the statistical error in the model overcoming the latency between x-ray acquisition and robotic repositioning. The total error was calculated at the time of each x-ray pair, across treatment fractions and patients, giving the average respiratory motion compensation error in three dimensions. The 99th percentile for the total radial error is 3.85 mm, with the highest contribution of 2.79 mm in the superior/inferior (S/I) direction. The absolute mean compensation error is 1.78 mm radially, with a 1.27 mm contribution in the S/I direction. Regions of high total error may provide insight into features predicting groups of patients with larger or smaller total errors.

  10. A Slowed Cell Cycle Stabilizes the Budding Yeast Genome.

    PubMed

    Vinton, Peter J; Weinert, Ted

    2017-06-01

    During cell division, aberrant DNA structures are detected by regulators called checkpoints that slow division to allow error correction. In addition to checkpoint-induced delay, it is widely assumed, though rarely shown, that merely slowing the cell cycle might allow more time for error detection and correction, thus resulting in a more stable genome. Fidelity by a slowed cell cycle might be independent of checkpoints. Here we tested the hypothesis that a slowed cell cycle stabilizes the genome, independent of checkpoints, in the budding yeast Saccharomyces cerevisiae. We were led to this hypothesis when we identified a gene (ERV14, an ER cargo membrane protein) that, when mutated, unexpectedly stabilized the genome, as measured by three different chromosome assays. After extensive studies of pathways rendered dysfunctional in erv14 mutant cells, we are led to the inference that no particular pathway is involved in stabilization, but rather the slowed cell cycle induced by erv14 stabilized the genome. We then demonstrated that, in genetic mutations and chemical treatments unrelated to ERV14, a slowed cell cycle indeed correlates with a more stable genome, even in checkpoint-proficient cells. Data suggest a delay in G2/M may commonly stabilize the genome. We conclude that chromosome errors are more rarely made or are more readily corrected when the cell cycle is slowed (even ∼15 min longer in an ∼100-min cell cycle). And, some chromosome errors may not signal checkpoint-mediated responses, or do not sufficiently signal to allow correction, and their correction benefits from this "time checkpoint." Copyright © 2017 by the Genetics Society of America.

  11. Oxygen monitor for semi-closed rebreathers: design and use for estimating metabolic oxygen consumption

    NASA Astrophysics Data System (ADS)

    Clarke, John R.; Southerland, David

    1999-07-01

    Semi-closed circuit underwater breathing apparatus (UBA) provide a constant flow of mixed gas containing oxygen and nitrogen or helium to a diver. However, as a diver's work rate and metabolic oxygen consumption vary, the oxygen percentages within the UBA can change dramatically. Hence, even a resting diver can become hyperoxic and be at risk for oxygen-induced seizures. Conversely, a hard-working diver can become hypoxic and lose consciousness. Unfortunately, current semi-closed UBA do not contain oxygen monitors. We describe a simple oxygen monitoring system designed and prototyped at the Navy Experimental Diving Unit. The main monitor components include a PIC microcontroller, analog-to-digital converter, bicolor LED, and oxygen sensor. The LED, affixed to the diver's mask, is steady green if the oxygen partial pressure is within pre-defined acceptable limits. A more advanced monitor with a depth sensor and additional computational circuitry could be used to estimate metabolic oxygen consumption. The computational algorithm uses the oxygen partial pressure and the diver's depth to compute metabolic oxygen consumption (VO2) using the steady-state solution of the differential equation describing oxygen concentrations within the UBA. Consequently, dive transients induce errors in the VO2 estimate. To evaluate these errors, we used a computer simulation of semi-closed circuit UBA dives to generate transient-rich data as input to the estimation algorithm. A step change in simulated VO2 elicits a monoexponential change in the estimated VO2 with a time constant of 5 to 10 minutes. Methods for predicting error and providing a probable error indication to the diver are presented.
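
    A textbook-style steady-state oxygen balance for a semi-closed rebreather is sketched below. It works at the level of gas fractions under simplifying assumptions (constant injection flow, vented gas at the measured loop composition) and is not necessarily the NEDU monitor's algorithm, which also uses depth and oxygen partial pressure.

```python
# Generic steady-state oxygen balance for a semi-closed rebreather:
# oxygen in = oxygen out + oxygen consumed, with vented gas at the measured
# loop fraction. Simplified textbook relation, not necessarily the algorithm
# implemented in the monitor described above.

def loop_o2_fraction(vo2_lpm, supply_lpm, supply_fraction):
    """Steady-state O2 fraction in the loop for a given metabolic consumption."""
    return (supply_lpm * supply_fraction - vo2_lpm) / (supply_lpm - vo2_lpm)

def estimate_vo2(loop_fraction, supply_lpm, supply_fraction):
    """Invert the balance: estimate metabolic O2 consumption from the sensor reading."""
    return supply_lpm * (supply_fraction - loop_fraction) / (1.0 - loop_fraction)

# Example: 12 L/min of 40% O2 premix; a diver consuming 1.5 L/min of O2.
f_loop = loop_o2_fraction(vo2_lpm=1.5, supply_lpm=12.0, supply_fraction=0.40)
print(round(f_loop, 3), round(estimate_vo2(f_loop, 12.0, 0.40), 2))
```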

  12. Qualitative and quantitative assessment of Illumina's forensic STR and SNP kits on MiSeq FGx™.

    PubMed

    Sharma, Vishakha; Chow, Hoi Yan; Siegel, Donald; Wurmbach, Elisa

    2017-01-01

    Massively parallel sequencing (MPS) is a powerful tool transforming DNA analysis in multiple fields ranging from medicine, to environmental science, to evolutionary biology. In forensic applications, MPS offers the ability to significantly increase the discriminatory power of human identification as well as aid in mixture deconvolution. However, before the benefits of any new technology can be employed, a thorough evaluation of its quality, consistency, sensitivity, and specificity must be rigorously evaluated in order to gain a detailed understanding of the technique including sources of error, error rates, and other restrictions/limitations. This extensive study assessed the performance of Illumina's MiSeq FGx MPS system and ForenSeq™ kit in nine experimental runs including 314 reaction samples. In-depth data analysis evaluated the consequences of different assay conditions on test results. Variables included: sample numbers per run, targets per run, DNA input per sample, and replications. Results are presented as heat maps revealing patterns for each locus. Data analysis focused on read numbers (allele coverage), drop-outs, drop-ins, and sequence analysis. The study revealed that loci with high read numbers performed better and resulted in fewer drop-outs and well balanced heterozygous alleles. Several loci were prone to drop-outs which led to falsely typed homozygotes and therefore to genotype errors. Sequence analysis of allele drop-in typically revealed a single nucleotide change (deletion, insertion, or substitution). Analyses of sequences, no template controls, and spurious alleles suggest no contamination during library preparation, pooling, and sequencing, but indicate that sequencing or PCR errors may have occurred due to DNA polymerase infidelities. Finally, we found utilizing Illumina's FGx System at recommended conditions does not guarantee 100% outcomes for all samples tested, including the positive control, and required manual editing due to low read numbers and/or allele drop-in. These findings are important for progressing towards implementation of MPS in forensic DNA testing.

  13. Intravenous Chemotherapy Compounding Errors in a Follow-Up Pan-Canadian Observational Study.

    PubMed

    Gilbert, Rachel E; Kozak, Melissa C; Dobish, Roxanne B; Bourrier, Venetia C; Koke, Paul M; Kukreti, Vishal; Logan, Heather A; Easty, Anthony C; Trbovich, Patricia L

    2018-05-01

    Intravenous (IV) compounding safety has garnered recent attention as a result of high-profile incidents, awareness efforts from the safety community, and increasingly stringent practice standards. New research with more-sensitive error detection techniques continues to reinforce that error rates with manual IV compounding are unacceptably high. In 2014, our team published an observational study that described three types of previously unrecognized and potentially catastrophic latent chemotherapy preparation errors in Canadian oncology pharmacies that would otherwise be undetectable. We expand on this research and explore whether additional potential human failures are yet to be addressed by practice standards. Field observations were conducted in four cancer center pharmacies in four Canadian provinces from January 2013 to February 2015. Human factors specialists observed and interviewed pharmacy managers, oncology pharmacists, pharmacy technicians, and pharmacy assistants as they carried out their work. Emphasis was on latent errors (potential human failures) that could lead to outcomes such as wrong drug, dose, or diluent. Given the relatively short observational period, no active failures or actual errors were observed. However, 11 latent errors in chemotherapy compounding were identified. In terms of severity, all 11 errors create the potential for a patient to receive the wrong drug or dose, which in the context of cancer care, could lead to death or permanent loss of function. Three of the 11 practices were observed in our previous study, but eight were new. Applicable Canadian and international standards and guidelines do not explicitly address many of the potentially error-prone practices observed. We observed a significant degree of risk for error in manual mixing practice. These latent errors may exist in other regions where manual compounding of IV chemotherapy takes place. Continued efforts to advance standards, guidelines, technological innovation, and chemical quality testing are needed.

  14. Spacecraft and propulsion technician error

    NASA Astrophysics Data System (ADS)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  15. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 5 2012-10-01 2012-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  16. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 5 2014-10-01 2014-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  17. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 5 2010-10-01 2010-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  18. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 5 2013-10-01 2013-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  19. 42 CFR 1005.23 - Harmless error.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 5 2011-10-01 2011-10-01 false Harmless error. 1005.23 Section 1005.23 Public Health OFFICE OF INSPECTOR GENERAL-HEALTH CARE, DEPARTMENT OF HEALTH AND HUMAN SERVICES OIG AUTHORITIES APPEALS OF EXCLUSIONS, CIVIL MONEY PENALTIES AND ASSESSMENTS § 1005.23 Harmless error. No error in either...

  20. 42 CFR 3.552 - Harmless error.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 1 2010-10-01 2010-10-01 false Harmless error. 3.552 Section 3.552 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL PROVISIONS PATIENT SAFETY ORGANIZATIONS AND PATIENT SAFETY WORK PRODUCT Enforcement Program § 3.552 Harmless error. No error in either the...

  1. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  2. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  3. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  4. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...

  5. Defense Mapping Agency (DMA) Raster-to-Vector Analysis

    DTIC Science & Technology

    1984-11-30

    model) to pinpoint critical deficiencies and understand trade-offs between alternative solutions. This may be exemplified by the allocation of human ...process, prone to errors (i.e., human operator eye/motor control limitations), and its time-consuming nature (as a function of data density). It should...achieved through the facilities of computer interactive graphics. Each error or anomaly is individually identified by a human operator and corrected

  6. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    PubMed

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward prediction errors and the changes in amplitude of these prediction errors at the time of choice presentation and reward delivery. Our results provide further support that the computations that underlie human learning and decision-making follow reinforcement learning principles.
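
    The learning pattern described above (a feedback prediction error that shrinks while the expectation available at choice grows) can be reproduced with a minimal delta-rule simulation; the learning rate, reward probability, and trial count below are invented for illustration.

```python
import numpy as np

# Minimal delta-rule simulation: as the value of the chosen option is learned,
# the prediction error at feedback shrinks while the expectation available at
# choice presentation grows. Parameters are invented for illustration.
rng = np.random.default_rng(1)
alpha, p_reward, n_trials = 0.1, 0.8, 60
value = 0.0
feedback_pe, choice_value = [], []

for _ in range(n_trials):
    choice_value.append(value)               # expectation at choice presentation
    reward = float(rng.random() < p_reward)  # probabilistic reward feedback
    pe = reward - value                      # reward prediction error at feedback
    feedback_pe.append(pe)
    value += alpha * pe                      # delta-rule update

print("mean |PE|, first 10 trials:", round(np.mean(np.abs(feedback_pe[:10])), 2))
print("mean |PE|, last 10 trials :", round(np.mean(np.abs(feedback_pe[-10:])), 2))
print("value at choice, trial 1 vs 60:", round(choice_value[0], 2), round(choice_value[-1], 2))
```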

  7. Causal Evidence from Humans for the Role of Mediodorsal Nucleus of the Thalamus in Working Memory.

    PubMed

    Peräkylä, Jari; Sun, Lihua; Lehtimäki, Kai; Peltola, Jukka; Öhman, Juha; Möttönen, Timo; Ogawa, Keith H; Hartikainen, Kaisa M

    2017-12-01

    The mediodorsal nucleus of the thalamus (MD), with its extensive connections to the lateral pFC, has been implicated in human working memory and executive functions. However, this understanding is based solely on indirect evidence from human lesion and imaging studies and animal studies. Direct, causal evidence from humans is missing. To obtain direct evidence for MD's role in humans, we studied patients treated with deep brain stimulation (DBS) for refractory epilepsy. This treatment is thought to prevent the generalization of a seizure by disrupting the functioning of the patient's anterior nuclei of the thalamus (ANT) with high-frequency electric stimulation. This structure is located superior and anterior to MD, and when the DBS lead is implanted in ANT, tip contacts of the lead typically penetrate through ANT into the adjoining MD. To study the role of MD in human executive functions and working memory, we periodically disrupted and recovered MD's function with high-frequency electric stimulation using DBS contacts reaching MD while participants performed a cognitive task engaging several aspects of executive functions. We hypothesized that the efficacy of executive functions, specifically working memory, is impaired when the functioning of MD is perturbed by high-frequency stimulation. Eight participants treated with ANT-DBS for refractory epilepsy performed a computer-based test of executive functions while DBS was repeatedly switched ON and OFF at MD and at the control location (ANT). In comparison to stimulation of the control location, when MD was stimulated, participants committed 2.26 times more errors in general (total errors; OR = 2.26, 95% CI [1.69, 3.01]) and 2.86 times more working memory-related errors specifically (incorrect button presses; OR = 2.88, CI [1.95, 4.24]). Similarly, participants committed 1.81 more errors in general ( OR = 1.81, CI [1.45, 2.24]) and 2.08 times more working memory-related errors ( OR = 2.08, CI [1.57, 2.75]) in comparison to no stimulation condition. "Total errors" is a composite score consisting of basic error types and was mostly driven by working memory-related errors. The facts that MD and a control location, ANT, are only few millimeters away from each other and that their stimulation produces very different results highlight the location-specific effect of DBS rather than regionally unspecific general effect. In conclusion, disrupting and recovering MD's function with high-frequency electric stimulation modulated participants' online working memory performance providing causal, in vivo evidence from humans for the role of MD in human working memory.

  8. A Method for the Study of Human Factors in Aircraft Operations

    NASA Technical Reports Server (NTRS)

    Barnhart, W.; Billings, C.; Cooper, G.; Gilstrap, R.; Lauber, J.; Orlady, H.; Puskas, B.; Stephens, W.

    1975-01-01

    A method for the study of human factors in the aviation environment is described. A conceptual framework is provided within which pilot and other human errors in aircraft operations may be studied with the intent of finding out how, and why, they occurred. An information processing model of human behavior serves as the basis for the acquisition and interpretation of information relating to occurrences which involve human error. A systematic method of collecting such data is presented and discussed. The classification of the data is outlined.

  9. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

    This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human-marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and objects containing curved features.
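
    The epipolar guidance mentioned above follows from the fundamental-matrix constraint x2^T F x1 = 0: a point clicked in the first image maps to a line F x1 in the second, and candidate correspondences can be ranked by their distance to that line. The sketch below uses an arbitrary placeholder F, not a calibrated one.

```python
import numpy as np

# For a point clicked in image 1, the corresponding point in image 2 must lie on
# the epipolar line l' = F @ x (from x2^T F x1 = 0). F here is a placeholder.
F = np.array([[0.0,   -1e-6,  1e-3],
              [1e-6,   0.0,  -2e-3],
              [-1e-3,  2e-3,  0.0]])

def epipolar_line(point_xy, fundamental=F):
    """Return line coefficients (a, b, c) with a*x + b*y + c = 0 in image 2."""
    x = np.array([point_xy[0], point_xy[1], 1.0])
    a, b, c = fundamental @ x
    norm = np.hypot(a, b)  # normalise so point-to-line distances are in pixels
    return a / norm, b / norm, c / norm

def distance_to_line(point_xy, line):
    a, b, c = line
    return abs(a * point_xy[0] + b * point_xy[1] + c)

line = epipolar_line((320.0, 240.0))
print(line, distance_to_line((350.0, 255.0), line))  # small distance = plausible match
```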

  10. New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.

    PubMed

    Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María

    2017-08-01

    In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations, and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The associated errors of the measurements were also estimated by applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables, in comparison with their real values, is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This error represents a substantial improvement over other methodologies, both in interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can easily be exported to other modern human populations, the human fossil record, and forensic sciences. © 2017 Wiley Periodicals, Inc.

  11. Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Munoz, Cesar A.

    2007-01-01

    This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.

  12. Math Mistakes That Make the News

    ERIC Educational Resources Information Center

    Lewis, Heather A.

    2015-01-01

    Teachers often promote care in doing calculations, but for most students a single mistake rarely has major consequences. This article presents several real-life events in which relatively minor mathematical errors led to situations that ranged from public embarrassment to the loss of millions of dollars' worth of equipment. The stories here…

  13. An integrated error estimation and lag-aware data assimilation scheme for real-time flood forecasting

    USDA-ARS?s Scientific Manuscript database

    The performance of conventional filtering methods can be degraded by ignoring the time lag between soil moisture and discharge response when discharge observations are assimilated into streamflow modelling. This has led to the ongoing development of more optimal ways to implement sequential data ass...

  14. Outpatient Prescribing Errors and the Impact of Computerized Prescribing

    PubMed Central

    Gandhi, Tejal K; Weingart, Saul N; Seger, Andrew C; Borus, Joshua; Burdick, Elisabeth; Poon, Eric G; Leape, Lucian L; Bates, David W

    2005-01-01

    Background Medication errors are common among inpatients and many are preventable with computerized prescribing. Relatively little is known about outpatient prescribing errors or the impact of computerized prescribing in this setting. Objective To assess the rates, types, and severity of outpatient prescribing errors and understand the potential impact of computerized prescribing. Design Prospective cohort study in 4 adult primary care practices in Boston using prescription review, patient survey, and chart review to identify medication errors, potential adverse drug events (ADEs) and preventable ADEs. Participants Outpatients over age 18 who received a prescription from 24 participating physicians. Results We screened 1879 prescriptions from 1202 patients, and completed 661 surveys (response rate 55%). Of the prescriptions, 143 (7.6%; 95% confidence interval (CI) 6.4% to 8.8%) contained a prescribing error. Three errors led to preventable ADEs and 62 (43%; 3% of all prescriptions) had potential for patient injury (potential ADEs); 1 was potentially life-threatening (2%) and 15 were serious (24%). Errors in frequency (n=77, 54%) and dose (n=26, 18%) were common. The rates of medication errors and potential ADEs were not significantly different at basic computerized prescribing sites (4.3% vs 11.0%, P=.31; 2.6% vs 4.0%, P=.16) compared to handwritten sites. Advanced checks (including dose and frequency checking) could have prevented 95% of potential ADEs. Conclusions Prescribing errors occurred in 7.6% of outpatient prescriptions and many could have harmed patients. Basic computerized prescribing systems may not be adequate to reduce errors. More advanced systems with dose and frequency checking are likely needed to prevent potentially harmful errors. PMID:16117752

  15. Types of diagnostic errors in neurological emergencies in the emergency department.

    PubMed

    Dubosh, Nicole M; Edlow, Jonathan A; Lefton, Micah; Pope, Jennifer V

    2015-02-01

    Neurological emergencies often pose diagnostic challenges for emergency physicians because these patients often present with atypical symptoms and standard imaging tests are imperfect. Misdiagnosis occurs due to a variety of errors. These can be classified as knowledge gaps, cognitive errors, and systems-based errors. The goal of this study was to describe these errors through review of quality assurance (QA) records. This was a retrospective pilot study of patients with neurological emergency diagnoses that were missed or delayed at one urban, tertiary academic emergency department. Cases meeting inclusion criteria were identified through review of QA records. Three emergency physicians independently reviewed each case and determined the type of error that led to the misdiagnosis. Proportions, confidence intervals, and a reliability coefficient were calculated. During the study period, 1168 cases were reviewed. Forty-two cases were found to include a neurological misdiagnosis and twenty-nine were determined to be the result of an error. The distribution of error types was as follows: knowledge gap 45.2% (95% CI 29.2, 62.2), cognitive error 29.0% (95% CI 15.9, 46.8), and systems-based error 25.8% (95% CI 13.5, 43.5). Cerebellar strokes were the most common type of stroke misdiagnosed, accounting for 27.3% of missed strokes. All three error types contributed to the misdiagnosis of neurological emergencies. Misdiagnosis of cerebellar lesions and erroneous radiology resident interpretations of neuroimaging were the most common mistakes. Understanding the types of errors may enable emergency physicians to develop possible solutions and avoid them in the future.

  16. High-power, red-light-emitting diode irradiation enhances proliferation, osteogenic differentiation, and mineralization of human periodontal ligament stem cells via ERK signaling pathway.

    PubMed

    Yamauchi, Nobuhiro; Taguchi, Yoichiro; Kato, Hirohito; Umeda, Makoto

    2018-03-01

    Light-emitting diode (LED) irradiation is attracting attention as a new light source for phototherapy. However, its effects on periodontal tissue regeneration remain unknown. The aim of this study was to examine the effects of high-power, red LED irradiation on human periodontal ligament stem cells (PDLSCs), which play an important role in periodontal tissue regeneration. PDLSCs were derived from adult human third molars. The light source was a red LED (peak wavelength: 650 nm). Energy densities ranging from 0 to 10 J/cm2 were tested to determine the optimal dose. PDLSC proliferation was measured using two parameters: live cell protease and ATP levels. After the cells were induced to differentiate, the effect of LED irradiation on osteogenic differentiation and mineralization was examined, with particular focus on the extracellular signal-regulated kinase (ERK) 1/2 signaling pathway, using an ERK inhibitor (PD98059). LED irradiation at 8 J/cm2 led to a significant increase in PDLSC proliferation and enhanced Runx2 and Osterix mRNA expression, alkaline phosphatase activity, procollagen type I C-peptide and osteocalcin production, calcium deposition, and alizarin red S staining. In addition, LED irradiation induced the activation of ERK1/2, and the effects of LED irradiation on PDLSC proliferation, differentiation, and mineralization could be suppressed by treatment with PD98059. The results of this study show that 650-nm, high-power, red LED irradiation increases PDLSC proliferation, osteogenic differentiation, and mineralization, mediated by ERK1/2 activation. These findings suggest that LED irradiation may be a useful tool for periodontal tissue regeneration. © 2018 American Academy of Periodontology.

  17. A cognitive taxonomy of medical errors.

    PubMed

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2004-06-01

    Propose a cognitive taxonomy of medical errors at the level of individuals and their interactions with technology. Use cognitive theories of human error and human action to develop the theoretical foundations of the taxonomy, develop the structure of the taxonomy, populate the taxonomy with examples of medical error cases, identify cognitive mechanisms for each category of medical error under the taxonomy, and apply the taxonomy to practical problems. Four criteria were used to evaluate the cognitive taxonomy. The taxonomy should be able (1) to categorize major types of errors at the individual level along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to describe how and explain why a specific error occurs, and (4) to generate intervention strategies for each type of error. The proposed cognitive taxonomy largely satisfies the four criteria at a theoretical and conceptual level. Theoretically, the proposed cognitive taxonomy provides a method to systematically categorize medical errors at the individual level along cognitive dimensions, leads to a better understanding of the underlying cognitive mechanisms of medical errors, and provides a framework that can guide future studies on medical errors. Practically, it provides guidelines for the development of cognitive interventions to decrease medical errors and foundation for the development of medical error reporting system that not only categorizes errors but also identifies problems and helps to generate solutions. To validate this model empirically, we will next be performing systematic experimental studies.

  18. Indoor visible light communication with smart lighting technology

    NASA Astrophysics Data System (ADS)

    Das Barman, Abhirup; Halder, Alak

    2017-02-01

    The performance of an indoor visible-light communication system using energy-efficient white light from 2D LED arrays is investigated. Enabled by recent advances in LED technology, IEEE 802.15.7 standardizes high-data-rate visible light communication and advocates colour shift keying (CSK) modulation to overcome flicker and to support dimming. Voronoi segmentation is employed for decoding the N-CSK constellation, which gives superior performance compared to other existing decoding methods. The two chief performance-degrading effects, inter-symbol interference and LED nonlinearity, are jointly mitigated using LMS post-equalization at the receiver, which improves the symbol error rate and increases the field of view of the receiver. It is found that, at 250 MHz, LMS post-equalization offers a 7 dB SNR improvement at a symbol error rate of 10^-6.
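
    The LMS post-equalizer referred to here is an adaptive FIR filter whose taps are updated from the error against known training symbols. The sketch below is a generic illustration of that idea, not the authors' implementation; the channel taps, nonlinearity, tap count, and step size are invented for the example:

      import numpy as np

      def lms_equalize(received, training, n_taps=7, mu=0.01):
          """Adapt an FIR equalizer with the least-mean-squares (LMS) rule."""
          w = np.zeros(n_taps)                          # equalizer tap weights
          out = np.zeros(len(received))
          for n in range(n_taps - 1, len(received)):
              x = received[n - n_taps + 1:n + 1][::-1]  # most recent samples first
              y = w @ x                                 # equalizer output
              e = training[n] - y                       # error vs. known training symbol
              w += mu * e * x                           # LMS weight update
              out[n] = y
          return out, w

      # Toy usage: a dispersive, mildly nonlinear "LED channel" (assumed values)
      rng = np.random.default_rng(0)
      symbols = rng.choice([-1.0, 1.0], size=2000)
      isi = np.convolve(symbols, [0.7, 0.6, 0.3])[:len(symbols)]           # inter-symbol interference
      received = np.tanh(isi) + 0.02 * rng.standard_normal(len(symbols))   # LED nonlinearity + noise
      equalized, _ = lms_equalize(received, symbols)
      print("pre-eq symbol errors: ", int(np.sum(np.sign(received[100:]) != symbols[100:])))
      print("post-eq symbol errors:", int(np.sum(np.sign(equalized[100:]) != symbols[100:])))

    Running the toy example shows the error count dropping sharply once the taps converge, which is the effect the abstract attributes to post-equalization.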

  19. Differential sensitivity to human communication in dogs, wolves, and human infants.

    PubMed

    Topál, József; Gergely, György; Erdohegyi, Agnes; Csibra, Gergely; Miklósi, Adám

    2009-09-04

    Ten-month-old infants persistently search for a hidden object at its initial hiding place even after observing it being hidden at another location. Recent evidence suggests that communicative cues from the experimenter contribute to the emergence of this perseverative search error. We replicated these results with dogs (Canis familiaris), who also commit more search errors in ostensive-communicative (in 75% of the total trials) than in noncommunicative (39%) or nonsocial (17%) hiding contexts. However, comparative investigations suggest that communicative signals serve different functions for dogs and infants, whereas human-reared wolves (Canis lupus) do not show doglike context-dependent differences of search errors. We propose that shared sensitivity to human communicative signals stems from convergent social evolution of the Homo and the Canis genera.

  20. Metrics for Business Process Models

    NASA Astrophysics Data System (ADS)

    Mendling, Jan

    Up until now, there has been little research on why people introduce errors in real-world business process models. In a more general context, Simon [404] points to the limitations of cognitive capabilities and concludes that humans act rationally only to a certain extent. Concerning modeling errors, this argument would imply that human modelers lose track of the interrelations of large and complex models due to their limited cognitive capabilities and introduce errors that they would not insert in a small model. A recent study by Mendling et al. [275] explores the extent to which certain complexity metrics of business process models can serve as error determinants. The authors conclude that complexity indeed appears to have an impact on error probability. Before we can test such a hypothesis in a more general setting, we have to establish an understanding of how we can define determinants that drive error probability and how we can measure them.

  1. Hierarchical Learning Induces Two Simultaneous, But Separable, Prediction Errors in Human Basal Ganglia

    PubMed Central

    Tsai, Karin; Wallis, Jonathan; Botvinick, Matthew

    2013-01-01

    Studies suggest that dopaminergic neurons report a unitary, global reward prediction error signal. However, learning in complex real-life tasks, in particular tasks that show hierarchical structure, requires multiple prediction errors that may coincide in time. We used functional neuroimaging to measure prediction error signals in humans performing such a hierarchical task involving simultaneous, uncorrelated prediction errors. Analysis of signals in a priori anatomical regions of interest in the ventral striatum and the ventral tegmental area indeed evidenced two simultaneous, but separable, prediction error signals corresponding to the two levels of hierarchy in the task. This result suggests that suitably designed tasks may reveal a more intricate pattern of firing in dopaminergic neurons. Moreover, the need for downstream separation of these signals implies possible limitations on the number of different task levels that we can learn about simultaneously. PMID:23536092
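
    The analysis builds on the standard reward prediction error, delta = r - V, tracked separately at the two levels of the task hierarchy. The toy sketch below only illustrates how two such error signals can coincide in time on every trial yet remain uncorrelated; the task structure, learning rate, and reward probabilities are invented and are not taken from the study:

      import numpy as np

      rng = np.random.default_rng(1)
      alpha = 0.1                       # learning rate (assumed)
      v_low, v_high = 0.0, 0.0          # value estimates at the two levels of hierarchy

      low_pe, high_pe = [], []
      for trial in range(500):
          r_low = float(rng.random() < 0.7)    # subgoal outcome (low level)
          r_high = float(rng.random() < 0.4)   # overall goal outcome (high level), uncorrelated
          delta_low = r_low - v_low            # low-level prediction error
          delta_high = r_high - v_high         # high-level prediction error
          v_low += alpha * delta_low
          v_high += alpha * delta_high
          low_pe.append(delta_low)
          high_pe.append(delta_high)

      # The two error signals occur simultaneously on every trial but are uncorrelated,
      # which is what allows a regression on the imaging data to separate them.
      print("correlation of the two prediction errors:",
            round(np.corrcoef(low_pe, high_pe)[0, 1], 3))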

  2. Associations between errors and contributing factors in aircraft maintenance

    NASA Technical Reports Server (NTRS)

    Hobbs, Alan; Williamson, Ann

    2003-01-01

    In recent years cognitive error models have provided insights into the unsafe acts that lead to many accidents in safety-critical environments. Most models of accident causation are based on the notion that human errors occur in the context of contributing factors. However, there is a lack of published information on possible links between specific errors and contributing factors. A total of 619 safety occurrences involving aircraft maintenance were reported using a self-completed questionnaire. Of these occurrences, 96% were related to the actions of maintenance personnel. The types of errors that were involved, and the contributing factors associated with those actions, were determined. Each type of error was associated with a particular set of contributing factors and with specific occurrence outcomes. Among the associations were links between memory lapses and fatigue and between rule violations and time pressure. Potential applications of this research include assisting with the design of accident prevention strategies, the estimation of human error probabilities, and the monitoring of organizational safety performance.

  3. Neuroanatomical dissociation for taxonomic and thematic knowledge in the human brain

    PubMed Central

    Schwartz, Myrna F.; Kimberg, Daniel Y.; Walker, Grant M.; Brecher, Adelyn; Faseyitan, Olufunsho K.; Dell, Gary S.; Mirman, Daniel; Coslett, H. Branch

    2011-01-01

    It is thought that semantic memory represents taxonomic information differently from thematic information. This study investigated the neural basis for the taxonomic-thematic distinction in a unique way. We gathered picture-naming errors from 86 individuals with poststroke language impairment (aphasia). Error rates were determined separately for taxonomic errors (“pear” in response to apple) and thematic errors (“worm” in response to apple), and their shared variance was regressed out of each measure. With the segmented lesions normalized to a common template, we carried out voxel-based lesion-symptom mapping on each error type separately. We found that taxonomic errors localized to the left anterior temporal lobe and thematic errors localized to the left temporoparietal junction. This is an indication that the contribution of these regions to semantic memory cleaves along taxonomic-thematic lines. Our findings show that a distinction long recognized in the psychological sciences is grounded in the structure and function of the human brain. PMID:21540329
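
    The key preprocessing step, regressing the shared variance of the two error measures out of each, can be illustrated with a minimal residualization sketch; the per-patient error rates below are invented, and the study's actual lesion-symptom mapping pipeline is not reproduced:

      import numpy as np

      def residualize(a, b):
          """Return a with the variance it shares with b regressed out, and vice versa.

          a, b: per-patient error rates (e.g. taxonomic and thematic naming errors).
          """
          a = np.asarray(a, float)
          b = np.asarray(b, float)
          slope_a, intercept_a = np.polyfit(b, a, 1)   # ordinary least squares of a on b
          slope_b, intercept_b = np.polyfit(a, b, 1)   # and of b on a
          a_resid = a - (slope_a * b + intercept_a)    # keep only what b does not explain
          b_resid = b - (slope_b * a + intercept_b)
          return a_resid, b_resid

      # Toy usage with made-up error rates for five patients
      taxonomic = [0.12, 0.30, 0.05, 0.22, 0.18]
      thematic  = [0.10, 0.25, 0.08, 0.15, 0.20]
      tax_r, them_r = residualize(taxonomic, thematic)
      print(np.round(tax_r, 3), np.round(them_r, 3))

    The residualized scores are what would then be related, voxel by voxel, to lesion status in the mapping analysis.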

  4. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small-molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small-molecules was reduced using scaling methods with a correction of maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
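
    Simple allometry fits CL = a * BW^b across the animal species on log-log axes and extrapolates to human body weight; the MLP correction scales the animal clearance values by maximum life span before fitting and divides the human prediction by the human value. A minimal sketch under those textbook definitions (the preclinical data and approximate life spans below are illustrative, not the paper's dataset):

      import numpy as np

      def simple_allometry(body_weights_kg, clearances_ml_min, human_bw=70.0):
          """Fit CL = a * BW**b on log-log axes and extrapolate to a 70 kg human."""
          b, log_a = np.polyfit(np.log10(body_weights_kg), np.log10(clearances_ml_min), 1)
          return (10 ** log_a) * human_bw ** b          # slope b is the allometric exponent

      # Hypothetical preclinical data: mouse, rat, monkey, dog
      bw = np.array([0.02, 0.25, 5.0, 10.0])            # body weight, kg
      cl = np.array([0.08, 0.8, 10.0, 18.0])            # clearance, mL/min

      print("simple allometry:", round(simple_allometry(bw, cl), 1), "mL/min")

      # MLP correction: fit CL * MLP instead of CL, then divide the human
      # prediction by the human maximum life span (~93 years).
      mlp = np.array([2.7, 4.7, 22.0, 20.0])            # species maximum life spans, years (approx.)
      cl_mlp_human = simple_allometry(bw, cl * mlp) / 93.4
      print("MLP-corrected  :", round(cl_mlp_human, 1), "mL/min")

    The BRW correction works the same way with brain weight in place of maximum life span.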

  5. Photogrammetry experiments with a model eye.

    PubMed Central

    Rosenthal, A R; Falconer, D G; Pieper, I

    1980-01-01

    Digital photogrammetry was performed on stereophotographs of the optic nerve head of a modified Zeiss model eye in which optic cups of varying depths could be simulated. Experiments were undertaken to determine the impact of both photographic and ocular variables on the photogrammetric measurements of cup depth. The photogrammetric procedure tolerates refocusing, repositioning, and realignment as well as small variations in the geometric position of the camera. Progressive underestimation of cup depth was observed with increasing myopia, while progressive overestimation was noted with increasing hyperopia. High cylindrical errors at axis 90 degrees led to significant errors in cup depth estimates, while high cylindrical errors at axis 180 degrees did not materially affect the accuracy of the analysis. Finally, cup depths were seriously underestimated when the pupil diameter was less than 5.0 mm. PMID:7448139

  6. Barriers to the medication error reporting process within the Irish National Ambulance Service, a focus group study.

    PubMed

    Byrne, Eamonn; Bury, Gerard

    2018-02-08

    Incident reporting is vital to identifying pre-hospital medication safety issues because literature suggests that the majority of errors pre-hospital are self-identified. In 2016, the National Ambulance Service (NAS) reported 11 medication errors to the national body with responsibility for risk management and insurance cover. The Health Information and Quality Authority in 2014 stated that reporting of clinical incidents, of which medication errors are a subset, was not felt to be representative of the actual events occurring. Even though reporting systems are in place, the levels appear to be well below what might be expected. Little data is available to explain this apparent discrepancy. To identify, investigate and document the barriers to medication error reporting within the NAS. An independent moderator led four focus groups in March of 2016. A convenience sample of 18 frontline Paramedics and Advanced Paramedics from Cork City and County discussed medication errors and the medication error reporting process. The sessions were recorded and anonymised, and the data was analysed using a process of thematic analysis. Practitioners understood the value of reporting errors. Barriers to reporting included fear of consequences and ridicule, procedural ambiguity, lack of feedback and a perceived lack of both consistency and confidentiality. The perceived consequences for making an error included professional, financial, litigious and psychological. Staff appeared willing to admit errors in a psychologically safe environment. Barriers to reporting are in line with international evidence. Time constraints prevented achievement of thematic saturation. Further study is warranted.

  7. BRDF-dependent accuracy of array-projection-based 3D sensors.

    PubMed

    Heist, Stefan; Kühmstedt, Peter; Tünnermann, Andreas; Notni, Gunther

    2017-03-10

    In order to perform high-speed three-dimensional (3D) shape measurements with structured light systems, high-speed projectors are required. One possibility is an array projector, which allows pattern projection at several tens of kilohertz by switching on and off the LEDs of various slide projectors. The different projection centers require a separate analysis, as the intensity received by the cameras depends on the projection direction and the object's bidirectional reflectance distribution function (BRDF). In this contribution, we investigate the BRDF-dependent errors of array-projection-based 3D sensors and propose an error compensation process.

  8. A pharmacist-led information technology intervention for medication errors (PINCER): a multicentre, cluster randomised, controlled trial and cost-effectiveness analysis.

    PubMed

    Avery, Anthony J; Rodgers, Sarah; Cantrill, Judith A; Armstrong, Sarah; Cresswell, Kathrin; Eden, Martin; Elliott, Rachel A; Howard, Rachel; Kendrick, Denise; Morris, Caroline J; Prescott, Robin J; Swanwick, Glen; Franklin, Matthew; Putman, Koen; Boyd, Matthew; Sheikh, Aziz

    2012-04-07

    Medication errors are common in primary care and are associated with considerable risk of patient harm. We tested whether a pharmacist-led, information technology-based intervention was more effective than simple feedback in reducing the number of patients at risk of measures related to hazardous prescribing and inadequate blood-test monitoring of medicines 6 months after the intervention. In this pragmatic, cluster randomised trial general practices in the UK were stratified by research site and list size, and randomly assigned by a web-based randomisation service in block sizes of two or four to one of two groups. The practices were allocated to either computer-generated simple feedback for at-risk patients (control) or a pharmacist-led information technology intervention (PINCER), composed of feedback, educational outreach, and dedicated support. The allocation was masked to researchers and statisticians involved in processing and analysing the data. The allocation was not masked to general practices, pharmacists, patients, or researchers who visited practices to extract data. [corrected]. Primary outcomes were the proportions of patients at 6 months after the intervention who had had any of three clinically important errors: non-selective non-steroidal anti-inflammatory drugs (NSAIDs) prescribed to those with a history of peptic ulcer without co-prescription of a proton-pump inhibitor; β blockers prescribed to those with a history of asthma; long-term prescription of angiotensin converting enzyme (ACE) inhibitor or loop diuretics to those 75 years or older without assessment of urea and electrolytes in the preceding 15 months. The cost per error avoided was estimated by incremental cost-effectiveness analysis. This study is registered with Controlled-Trials.com, number ISRCTN21785299. 72 general practices with a combined list size of 480,942 patients were randomised. At 6 months' follow-up, patients in the PINCER group were significantly less likely to have been prescribed a non-selective NSAID if they had a history of peptic ulcer without gastroprotection (OR 0·58, 95% CI 0·38-0·89); a β blocker if they had asthma (0·73, 0·58-0·91); or an ACE inhibitor or loop diuretic without appropriate monitoring (0·51, 0·34-0·78). PINCER has a 95% probability of being cost effective if the decision-maker's ceiling willingness to pay reaches £75 per error avoided at 6 months. The PINCER intervention is an effective method for reducing a range of medication errors in general practices with computerised clinical records. Patient Safety Research Portfolio, Department of Health, England. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Fifty Years of THERP and Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring

    2012-06-01

    In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of the NUREG/CR-1278 in 1983. THERP is now 50 years old, and remains the most well known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament to its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain’s pioneering work.
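
    THERP combines nominal human error probabilities (HEPs) through an event tree, adjusting each step for its dependence on the preceding one. The sketch below applies the commonly cited THERP dependence equations to a pair of steps; the nominal HEPs are invented for illustration, not taken from NUREG/CR-1278:

      # Commonly cited THERP dependence equations: conditional HEP of step B
      # given that the preceding step A failed, where n is B's nominal HEP.
      DEPENDENCE = {
          "zero":     lambda n: n,
          "low":      lambda n: (1 + 19 * n) / 20,
          "moderate": lambda n: (1 + 6 * n) / 7,
          "high":     lambda n: (1 + n) / 2,
          "complete": lambda n: 1.0,
      }

      def p_both_fail(hep_a, hep_b, dependence="zero"):
          """P(A fails) * P(B fails | A failed) under the chosen dependence level."""
          return hep_a * DEPENDENCE[dependence](hep_b)

      # Hypothetical pair of consecutive steps, each with a nominal HEP of 3e-3
      for level in DEPENDENCE:
          print(f"{level:9s} dependence -> P(both steps fail) = {p_both_fail(3e-3, 3e-3, level):.2e}")

    Even this toy calculation shows why dependence matters: moving from zero to complete dependence changes the joint failure probability by several orders of magnitude.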

  10. Error-free replicative bypass of (6–4) photoproducts by DNA polymerase ζ in mouse and human cells

    PubMed Central

    Yoon, Jung-Hoon; Prakash, Louise; Prakash, Satya

    2010-01-01

    The ultraviolet (UV)-induced (6–4) pyrimidine–pyrimidone photoproduct [(6–4) PP] confers a large structural distortion in DNA. Here we examine in human cells the roles of translesion synthesis (TLS) DNA polymerases (Pols) in promoting replication through a (6–4) TT photoproduct carried on a duplex plasmid where bidirectional replication initiates from an origin of replication. We show that TLS contributes to a large fraction of lesion bypass and that it is mostly error-free. We find that, whereas Pol η and Pol ι provide alternate pathways for mutagenic TLS, surprisingly, Pol ζ functions independently of these Pols and in a predominantly error-free manner. We verify and extend these observations in mouse cells and conclude that, in human cells, TLS during replication can be markedly error-free even opposite a highly distorting DNA lesion. PMID:20080950

  11. Design of compact freeform lens for application specific Light-Emitting Diode packaging.

    PubMed

    Wang, Kai; Chen, Fei; Liu, Zongyuan; Luo, Xiaobing; Liu, Sheng

    2010-01-18

    Application specific LED packaging (ASLP) is an emerging technology for high performance LED lighting. We introduce a practical design method for a compact freeform lens for extended sources used in ASLP. A new ASLP for road lighting was successfully obtained by integrating a polycarbonate compact freeform lens of small form factor with traditional LED packaging. Optical performance of the ASLP was investigated by both numerical simulation, based on the Monte Carlo ray tracing method, and experiments. Results demonstrated that, compared with a traditional LED module integrated with secondary optics, the ASLP had the advantages of much smaller volume (approximately 1/8), higher system lumen efficiency (approximately 8.1%), lower cost, and more convenience for customers in design and assembly, enabling much wider application of LEDs in general road lighting. Tolerance analyses were also conducted. Installation errors in the horizontal and vertical directions had a greater effect on the shape and uniformity of the radiation pattern than rotational deviation did. The tolerances of the horizontal, vertical, and rotational deviations of this lens were 0.11 mm, 0.14 mm, and 2.4 degrees, respectively, which are acceptable in engineering.

  12. A Framework for Modeling Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.

  13. Inborn Errors of Human JAKs and STATs

    PubMed Central

    Casanova, Jean-Laurent; Holland, Steven M.; Notarangelo, Luigi D.

    2012-01-01

    Inborn errors of the genes encoding two of the four human JAKs (JAK3 and TYK2) and three of the six human STATs (STAT1, STAT3, and STAT5B) have been described. We review the disorders arising from mutations in these five genes, highlighting the way in which the molecular and cellular pathogenesis of these conditions has been clarified by the discovery of inborn errors of cytokines, hormones, and their receptors, including those interacting with JAKs and STATs. The phenotypic similarities between mice and humans lacking individual JAK-STAT components suggest that the functions of JAKs and STATs are largely conserved in mammals. However, a wide array of phenotypic differences has emerged between mice and humans carrying bi-allelic null alleles of JAK3, TYK2, STAT1, or STAT5B. Moreover, the high level of allelic heterogeneity at the human JAK3, STAT1, and STAT3 loci has revealed highly diverse immunological and clinical phenotypes, which had not been anticipated. PMID:22520845

  14. Inborn errors of human JAKs and STATs.

    PubMed

    Casanova, Jean-Laurent; Holland, Steven M; Notarangelo, Luigi D

    2012-04-20

    Inborn errors of the genes encoding two of the four human JAKs (JAK3 and TYK2) and three of the six human STATs (STAT1, STAT3, and STAT5B) have been described. We review the disorders arising from mutations in these five genes, highlighting the way in which the molecular and cellular pathogenesis of these conditions has been clarified by the discovery of inborn errors of cytokines, hormones, and their receptors, including those interacting with JAKs and STATs. The phenotypic similarities between mice and humans lacking individual JAK-STAT components suggest that the functions of JAKs and STATs are largely conserved in mammals. However, a wide array of phenotypic differences has emerged between mice and humans carrying biallelic null alleles of JAK3, TYK2, STAT1, or STAT5B. Moreover, the high degree of allelic heterogeneity at the human JAK3, TYK2, STAT1, and STAT3 loci has revealed highly diverse immunological and clinical phenotypes, which had not been anticipated. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Three-dimensional high-precision indoor positioning strategy using Tabu search based on visible light communication

    NASA Astrophysics Data System (ADS)

    Peng, Qi; Guan, Weipeng; Wu, Yuxiang; Cai, Ye; Xie, Canyu; Wang, Pengfei

    2018-01-01

    This paper proposes a three-dimensional (3-D) high-precision indoor positioning strategy using Tabu search based on visible light communication. Tabu search is a powerful global optimization algorithm, and 3-D indoor positioning can be transformed into an optimal solution problem. Therefore, in 3-D indoor positioning, the optimal receiver coordinate can be obtained by the Tabu search algorithm. To the best of our knowledge, this is the first time the Tabu search algorithm has been applied to visible light positioning. Each light-emitting diode (LED) in the system broadcasts its unique identity (ID) information. When the receiver detects optical signals with ID information from different LEDs, the global optimization of the Tabu search algorithm allows 3-D high-precision indoor positioning to be realized once the fitness value meets certain conditions. Simulation results show that the average positioning error is 0.79 cm, and the maximum error is 5.88 cm. An extended trajectory-tracking experiment also shows that 95.05% of positioning errors are below 1.428 cm. It can be concluded from these data that 3-D indoor positioning based on the Tabu search algorithm meets the requirements of centimeter-level indoor positioning. The algorithm is effective and practical and is superior to other existing methods for visible light indoor positioning.
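
    Treating positioning as an optimization problem means searching for the receiver coordinate whose predicted LED distances best match those inferred from the received signals. The sketch below is a greatly simplified, generic illustration of that idea, with a made-up fitness function and a bare-bones Tabu-style neighborhood search; it is not the authors' algorithm or channel model:

      import itertools
      import numpy as np

      LEDS = np.array([[0.0, 0.0, 3.0], [4.0, 0.0, 3.0], [0.0, 4.0, 3.0], [4.0, 4.0, 3.0]])

      def fitness(candidate, measured_dists):
          """Sum of squared mismatches between candidate-to-LED distances and measured ranges."""
          d = np.linalg.norm(LEDS - candidate, axis=1)
          return float(np.sum((d - measured_dists) ** 2))

      def tabu_search(measured_dists, start, steps=200, step_size=0.1, tabu_len=20):
          """Bare-bones Tabu search over a 3-D grid of candidate receiver positions."""
          moves = [np.array(m) * step_size
                   for m in itertools.product((-1, 0, 1), repeat=3) if any(m)]
          current = np.array(start, float)
          best = current.copy()
          tabu = []                                       # recently visited positions
          for _ in range(steps):
              neighbors = [current + m for m in moves
                           if not any(np.allclose(current + m, t) for t in tabu)]
              if not neighbors:
                  break
              current = min(neighbors, key=lambda c: fitness(c, measured_dists))
              tabu.append(current.copy())
              tabu = tabu[-tabu_len:]                     # fixed-length tabu list
              if fitness(current, measured_dists) < fitness(best, measured_dists):
                  best = current.copy()
          return best

      true_pos = np.array([1.3, 2.2, 0.8])
      measured = np.linalg.norm(LEDS - true_pos, axis=1)  # noiseless toy measurement
      estimate = tabu_search(measured, start=[2.0, 2.0, 1.0])
      print("estimate:", np.round(estimate, 2),
            " error (m):", round(float(np.linalg.norm(estimate - true_pos)), 3))

    The tabu list is what lets the search leave local minima without immediately cycling back, which is the property that motivates using it over plain gradient-style descent.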

  16. Use of modeling to identify vulnerabilities to human error in laparoscopy.

    PubMed

    Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra

    2010-01-01

    This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
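
    A modified FMEA of the kind described here prioritizes each potential error by combining its probability of occurrence, the severity of its consequences if undetected, and its likelihood of escaping detection. A minimal sketch of that scoring step, with invented error modes and ratings for illustration only:

      from dataclasses import dataclass

      @dataclass
      class ErrorMode:
          description: str
          occurrence: int   # 1 (rare) .. 10 (frequent)
          severity: int     # 1 (negligible) .. 10 (catastrophic if undetected)
          detection: int    # 1 (almost always detected) .. 10 (rarely detected)

          @property
          def rpn(self) -> int:
              """Risk priority number: higher means intervene first."""
              return self.occurrence * self.severity * self.detection

      # Hypothetical error modes for Veress needle insertion (ratings are illustrative only)
      modes = [
          ErrorMode("Needle angled toward major vessels", 3, 10, 7),
          ErrorMode("Insufflation started before placement confirmed", 4, 8, 5),
          ErrorMode("Excessive insertion force applied", 5, 6, 4),
      ]

      for mode in sorted(modes, key=lambda m: m.rpn, reverse=True):
          print(f"RPN {mode.rpn:4d}  {mode.description}")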

  17. The role of the cerebellum in sub- and supraliminal error correction during sensorimotor synchronization: evidence from fMRI and TMS.

    PubMed

    Bijsterbosch, Janine D; Lee, Kwang-Hyuk; Hunter, Michael D; Tsoi, Daniel T; Lankappa, Sudheer; Wilkinson, Iain D; Barker, Anthony T; Woodruff, Peter W R

    2011-05-01

    Our ability to interact physically with objects in the external world critically depends on temporal coupling between perception and movement (sensorimotor timing) and swift behavioral adjustment to changes in the environment (error correction). In this study, we investigated the neural correlates of the correction of subliminal and supraliminal phase shifts during a sensorimotor synchronization task. In particular, we focused on the role of the cerebellum because this structure has been shown to play a role in both motor timing and error correction. Experiment 1 used fMRI to show that the right cerebellar dentate nucleus and primary motor and sensory cortices were activated during regular timing and during the correction of subliminal errors. The correction of supraliminal phase shifts led to additional activations in the left cerebellum and right inferior parietal and frontal areas. Furthermore, a psychophysiological interaction analysis revealed that supraliminal error correction was associated with enhanced connectivity of the left cerebellum with frontal, auditory, and sensory cortices and with the right cerebellum. Experiment 2 showed that suppression of the left but not the right cerebellum with theta burst TMS significantly affected supraliminal error correction. These findings provide evidence that the left lateral cerebellum is essential for supraliminal error correction during sensorimotor synchronization.

  18. Detection of Error Related Neuronal Responses Recorded by Electrocorticography in Humans during Continuous Movements

    PubMed Central

    Milekovic, Tomislav; Ball, Tonio; Schulze-Bonhage, Andreas; Aertsen, Ad; Mehring, Carsten

    2013-01-01

    Background Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user’s movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) the adaptive BMI decoding algorithm can be updated to make fewer errors in the future. Methodology/Principal Findings Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300–400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of detection information for outcome errors and 74% of detection information for execution errors available from all ECoG electrodes could be retained. Conclusions/Significance The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation. PMID:23383315

  19. Inborn Errors of Fructose Metabolism. What Can We Learn from Them?

    PubMed

    Tran, Christel

    2017-04-03

    Fructose is one of the main sweetening agents in the human diet and its ingestion is increasing globally. Dietary sugar has particular effects on those whose capacity to metabolize fructose is limited. Although intolerance to carbohydrates is a frequent finding in children, inborn errors of carbohydrate metabolism are rare conditions. Three inborn errors are known in the pathway of fructose metabolism: (1) essential or benign fructosuria due to fructokinase deficiency; (2) hereditary fructose intolerance; and (3) fructose-1,6-bisphosphatase deficiency. In this review the focus is on the clinical symptoms and biochemical anomalies of the three inborn errors of metabolism. The potential toxic effects of fructose in healthy humans are also discussed. Studies conducted in patients with inborn errors of fructose metabolism have helped to clarify fructose metabolism and its potential toxicity in healthy humans. The influence of fructose on the glycolytic pathway and on purine catabolism is the cause of hypoglycemia, lactic acidosis, and hyperuricemia. The discovery that fructose-mediated generation of uric acid may have a causal role in diabetes and obesity has provided new insights into the pathogenesis of these common diseases.

  20. Inborn Errors of Fructose Metabolism. What Can We Learn from Them?

    PubMed Central

    Tran, Christel

    2017-01-01

    Fructose is one of the main sweetening agents in the human diet and its ingestion is increasing globally. Dietary sugar has particular effects on those whose capacity to metabolize fructose is limited. Although intolerance to carbohydrates is a frequent finding in children, inborn errors of carbohydrate metabolism are rare conditions. Three inborn errors are known in the pathway of fructose metabolism: (1) essential or benign fructosuria due to fructokinase deficiency; (2) hereditary fructose intolerance; and (3) fructose-1,6-bisphosphatase deficiency. In this review the focus is on the clinical symptoms and biochemical anomalies of the three inborn errors of metabolism. The potential toxic effects of fructose in healthy humans are also discussed. Studies conducted in patients with inborn errors of fructose metabolism have helped to clarify fructose metabolism and its potential toxicity in healthy humans. The influence of fructose on the glycolytic pathway and on purine catabolism is the cause of hypoglycemia, lactic acidosis, and hyperuricemia. The discovery that fructose-mediated generation of uric acid may have a causal role in diabetes and obesity has provided new insights into the pathogenesis of these common diseases. PMID:28368361

  1. Effect of NASA light-emitting diode irradiation on wound healing.

    PubMed

    Whelan, H T; Smits, R L; Buchman, E V; Whelan, N T; Turner, S G; Margolis, D A; Cevenini, V; Stinson, H; Ignatius, R; Martin, T; Cwiklinski, J; Philippi, A F; Graf, W R; Hodgson, B; Gould, L; Kane, M; Chen, G; Caviness, J

    2001-12-01

    The purpose of this study was to assess the effects of hyperbaric oxygen (HBO) and near-infrared light therapy on wound healing. Light-emitting diodes (LEDs), originally developed for NASA plant growth experiments in space, show promise for delivering light deep into tissues of the body to promote wound healing and human tissue growth. In this paper, we review and present our new data on LED treatment of cells grown in culture, of ischemic and diabetic wounds in rat models, and of acute and chronic wounds in humans. In vitro and in vivo (animal and human) studies utilized a variety of LED wavelength, power intensity, and energy density parameters to begin to identify conditions for each biological tissue that are optimal for biostimulation. LED treatment produced in vitro increases in cell growth of 140-200% in mouse-derived fibroblasts, rat-derived osteoblasts, and rat-derived skeletal muscle cells, and increases in growth of 155-171% in normal human epithelial cells. Wound size decreased up to 36% in conjunction with HBO in ischemic rat models. LED treatment produced improvement of greater than 40% in musculoskeletal training injuries in Navy SEAL team members, and decreased wound healing time in crew members aboard a U.S. Naval submarine. LED treatment produced a 47% reduction in pain in children suffering from oral mucositis. We believe that the use of NASA LEDs for light therapy alone, and in conjunction with hyperbaric oxygen, will greatly enhance the natural wound healing process, and more quickly return the patient to a preinjury/illness level of activity. This work is supported and managed through the NASA Marshall Space Flight Center-SBIR Program.

  2. Using lean "automation with a human touch" to improve medication safety: a step closer to the "perfect dose".

    PubMed

    Ching, Joan M; Williams, Barbara L; Idemoto, Lori M; Blackmore, C Craig

    2014-08-01

    Virginia Mason Medical Center (Seattle) employed the Lean concept of Jidoka (automation with a human touch) to plan for and deploy bar code medication administration (BCMA) to hospitalized patients. Integrating BCMA technology into the nursing work flow with minimal disruption was accomplished using three steps of Jidoka: (1) assigning work to humans and machines on the basis of their differing abilities, (2) adapting machines to the human work flow, and (3) monitoring the human-machine interaction. Effectiveness of BCMA to both reinforce safe administration practices and reduce medication errors was measured using the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study methodology. Trained nurses observed a total of 16,149 medication doses for 3,617 patients in a three-year period. Following BCMA implementation, the number of safe practice violations decreased from 54.8 violations/100 doses (January 2010-September 2011) to 29.0 violations/100 doses (October 2011-December 2012), resulting in an absolute risk reduction of 25.8 violations/100 doses (95% confidence interval [CI]: 23.7, 27.9, p < .001). The number of medication errors decreased from 5.9 errors/100 doses at baseline to 3.0 errors/100 doses after BCMA implementation (absolute risk reduction: 2.9 errors/100 doses [95% CI: 2.2, 3.6, p < .001]). The number of unsafe administration practices (estimate, -5.481; standard error 1.133; p < .001; 95% CI: -7.702, -3.260) also decreased. As more hospitals respond to health information technology meaningful use incentives, thoughtful, methodical, and well-managed approaches to technology deployment are crucial. This work illustrates how Jidoka offers opportunities for a smooth transition to new technology.

  3. Evaluating the Performance Diagnostic Checklist-Human Services to Assess Incorrect Error-Correction Procedures by Preschool Paraprofessionals

    ERIC Educational Resources Information Center

    Bowe, Melissa; Sellers, Tyra P.

    2018-01-01

    The Performance Diagnostic Checklist-Human Services (PDC-HS) has been used to assess variables contributing to undesirable staff performance. In this study, three preschool teachers completed the PDC-HS to identify the factors contributing to four paraprofessionals' inaccurate implementation of error-correction procedures during discrete trial…

  4. The Importance of HRA in Human Space Flight: Understanding the Risks

    NASA Technical Reports Server (NTRS)

    Hamlin, Teri

    2010-01-01

    Human performance is critical to crew safety during space missions. Humans interact with hardware and software during ground processing, normal flight, and in response to events. Human interactions with hardware and software can cause Loss of Crew and/or Vehicle (LOCV) through improper actions, or may prevent LOCV through recovery and control actions. Humans have the ability to deal with complex situations and system interactions beyond the capability of machines. Human Reliability Analysis (HRA) is a method used to qualitatively and quantitatively assess the occurrence of human failures that affect availability and reliability of complex systems. Modeling human actions with their corresponding failure probabilities in a Probabilistic Risk Assessment (PRA) provides a more complete picture of system risks and risk contributions. A high-quality HRA can provide valuable information on potential areas for improvement, including training, procedures, human interfaces design, and the need for automation. Modeling human error has always been a challenge in part because performance data is not always readily available. For spaceflight, the challenge is amplified not only because of the small number of participants and limited amount of performance data available, but also due to the lack of definition of the unique factors influencing human performance in space. These factors, called performance shaping factors in HRA terminology, are used in HRA techniques to modify basic human error probabilities in order to capture the context of an analyzed task. Many of the human error modeling techniques were developed within the context of nuclear power plants and therefore the methodologies do not address spaceflight factors such as the effects of microgravity and longer duration missions. This presentation will describe the types of human error risks which have shown up as risk drivers in the Shuttle PRA which may be applicable to commercial space flight. As with other large PRAs of complex machines, human error in the Shuttle PRA proved to be an important contributor (12 percent) to LOCV. An existing HRA technique was adapted for use in the Shuttle PRA, but additional guidance and improvements are needed to make the HRA task in space-related PRAs easier and more accurate. Therefore, this presentation will also outline plans for expanding current HRA methodology to more explicitly cover spaceflight performance shaping factors.

  5. Comment on "Differential sensitivity to human communication in dogs, wolves, and human infants".

    PubMed

    Fiset, Sylvain

    2010-07-09

    Topál et al. (Reports, 4 September 2009, p. 1269) reported that dogs' sensitivity to reading and using human signals contributes to the emergence of a spatial perseveration error (the A-not-B error) for locating objects. Here, I argue that the authors' conclusion was biased by two confounding factors: the use of an atypical A-not-B search task and an inadequate nonsocial condition as a control.

  6. NASA: Model development for human factors interfacing

    NASA Technical Reports Server (NTRS)

    Smith, L. L.

    1984-01-01

    The results of an intensive literature review in the general topics of human error analysis, stress and job performance, and accident and safety analysis revealed no usable techniques or approaches for analyzing human error in ground or space operations tasks. A task review model is described and proposed to be developed in order to reduce the degree of labor intensiveness in ground and space operations tasks. An extensive number of annotated references are provided.

  7. Causes and Prevention of Laparoscopic Bile Duct Injuries

    PubMed Central

    Way, Lawrence W.; Stewart, Lygia; Gantert, Walter; Liu, Kingsway; Lee, Crystine M.; Whang, Karen; Hunter, John G.

    2003-01-01

    Objective To apply human performance concepts in an attempt to understand the causes of and prevent laparoscopic bile duct injury. Summary Background Data Powerful conceptual advances have been made in understanding the nature and limits of human performance. Applying these findings in high-risk activities, such as commercial aviation, has allowed the work environment to be restructured to substantially reduce human error. Methods The authors analyzed 252 laparoscopic bile duct injuries according to the principles of the cognitive science of visual perception, judgment, and human error. The injury distribution was class I, 7%; class II, 22%; class III, 61%; and class IV, 10%. The data included operative radiographs, clinical records, and 22 videotapes of original operations. Results The primary cause of error in 97% of cases was a visual perceptual illusion. Faults in technical skill were present in only 3% of injuries. Knowledge and judgment errors were contributory but not primary. Sixty-four injuries (25%) were recognized at the index operation; the surgeon identified the problem early enough to limit the injury in only 15 (6%). In class III injuries the common duct, erroneously believed to be the cystic duct, was deliberately cut. This stemmed from an illusion of object form due to a specific uncommon configuration of the structures and the heuristic nature (unconscious assumptions) of human visual perception. The videotapes showed the persuasiveness of the illusion, and many operative reports described the operation as routine. Class II injuries resulted from a dissection too close to the common hepatic duct. Fundamentally an illusion, it was contributed to in some instances by working too deep in the triangle of Calot. Conclusions These data show that errors leading to laparoscopic bile duct injuries stem principally from misperception, not errors of skill, knowledge, or judgment. The misperception was so compelling that in most cases the surgeon did not recognize a problem. Even when irregularities were identified, corrective feedback did not occur, which is characteristic of human thinking under firmly held assumptions. These findings illustrate the complexity of human error in surgery while simultaneously providing insights. They demonstrate that automatically attributing technical complications to behavioral factors that rely on the assumption of control is likely to be wrong. Finally, this study shows that there are only a few points within laparoscopic cholecystectomy where the complication-causing errors occur, which suggests that focused training to heighten vigilance might be able to decrease the incidence of bile duct injury. PMID:12677139

  8. Advancing Usability Evaluation through Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring; David I. Gertman

    2005-07-01

    This paper introduces a novel augmentation to the current heuristic usability evaluation methodology. The SPAR-H human reliability analysis method was developed for categorizing human performance in nuclear power plants. Despite the specialized use of SPAR-H for safety critical scenarios, the method also holds promise for use in commercial off-the-shelf software usability evaluations. The SPAR-H method shares task analysis underpinnings with human-computer interaction, and it can be easily adapted to incorporate usability heuristics as performance shaping factors. By assigning probabilistic modifiers to heuristics, it is possible to arrive at the usability error probability (UEP). This UEP is not a literal probability of error but nonetheless provides a quantitative basis for heuristic evaluation. When combined with a consequence matrix for usability errors, this method affords ready prioritization of usability issues.
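
    In SPAR-H-style quantification, a nominal error probability is multiplied by the modifiers assigned to each performance shaping factor; the proposal here substitutes usability heuristics for those factors. The sketch below shows only that arithmetic, with an assumed nominal probability and invented heuristic multipliers (the published SPAR-H tables are not reproduced):

      from math import prod

      NOMINAL_HEP = 0.001   # assumed nominal error probability for a routine action

      # Hypothetical multipliers assigned after a heuristic evaluation:
      # >1 degrades performance, 1 is nominal, <1 credits good design.
      heuristic_multipliers = {
          "visibility of system status": 2.0,   # status feedback judged poor
          "match with real world":       1.0,   # terminology judged adequate
          "error prevention":            5.0,   # no confirmation on destructive actions
          "recognition over recall":     0.5,   # well-labelled controls
      }

      uep = NOMINAL_HEP * prod(heuristic_multipliers.values())
      print(f"usability error probability (UEP) ~ {uep:.2e}")

    Pairing each UEP with a consequence rating then gives the prioritization matrix the abstract describes.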

  9. A high-resolution line sensor-based photostereometric system for measuring jaw movements in 6 degrees of freedom.

    PubMed

    Hayashi, T; Kurokawa, M; Miyakawa, M; Aizawa, T; Kanaki, A; Saitoh, A; Ishioka, K

    1994-01-01

    Photostereometry has widely been applied to the measurement of mandibular movements in 6 degrees of freedom. In order to improve the accuracy of this measurement, we developed a system utilizing small LEDs mounted on the jaws in redundant numbers and a 5000-pixel linear charge-coupled device (CCD) as a photo-sensor. A total of eight LEDs are mounted on the jaws, in two sets of four, by means of connecting facebows, each weighing approximately 55 g. The positions of the LEDs are detected in three dimensions by two sets of three CCD cameras, located bilaterally. The position and orientation of the mandible are estimated from the positions of all LEDs in a least-squares sense, thereby effectively reducing the measurement errors. Various accuracy verification tests indicated that the static overall accuracy lies within 0.19 mm at tooth points and 0.34 mm at condylar points.
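
    The least-squares pose estimation step described above can be sketched as follows; this is a generic SVD-based (Kabsch) rigid-body fit on hypothetical marker coordinates, not the authors' implementation:

```python
import numpy as np

def rigid_pose_least_squares(ref, meas):
    """Least-squares rigid transform (R, t) mapping reference marker
    coordinates to measured coordinates (Kabsch/SVD method)."""
    ref_c, meas_c = ref.mean(axis=0), meas.mean(axis=0)
    H = (ref - ref_c).T @ (meas - meas_c)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                              # proper rotation
    t = meas_c - R @ ref_c
    return R, t

# Hypothetical example: 4 LEDs on a facebow (mm), measured with small noise.
ref = np.array([[0, 0, 0], [60, 0, 0], [0, 40, 0], [60, 40, 10]], float)
true_t = np.array([5.0, -2.0, 1.0])
meas = ref + true_t + np.random.normal(0, 0.1, ref.shape)

R, t = rigid_pose_least_squares(ref, meas)
print("estimated translation (mm):", np.round(t, 2))
```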

  10. Thermal and optical aspects of glob-top design for phosphor converted white LED light sources

    NASA Astrophysics Data System (ADS)

    Sommer, Christian; Fulmek, Paul; Nicolics, Johann; Schweitzer, Susanne; Nemitz, Wolfgang; Hartmann, Paul; Pachler, Peter; Hoschopf, Hans; Schrank, Franz; Langer, Gregor; Wenzl, Franz P.

    2013-09-01

    For a systematic approach to improving the white light quality of phosphor converted light-emitting diodes (LEDs) for general lighting applications, it is imperative to get the individual sources of error for correlated color temperature (CCT) reproducibility and maintenance under control. In this regard, it is of essential importance to understand how the geometrical, optical and thermal properties of the color conversion elements (CCEs), which typically consist of phosphor particles embedded in a transparent matrix material, affect the constancy of a desired CCT value. In this contribution we use an LED assembly, consisting of an LED die mounted on a printed circuit board by chip-on-board technology and a CCE in a glob-top configuration on top of it, as a model system and discuss the impact of the CCE shape and size on CCT constancy with respect to substrate reflectivity and thermal load of the CCEs. From these studies, some general conclusions for improved glob-top design can be drawn.

  11. Forward and correctional OFDM-based visible light positioning

    NASA Astrophysics Data System (ADS)

    Li, Wei; Huang, Zhitong; Zhao, Runmei; He, Peixuan; Ji, Yuefeng

    2017-09-01

    Visible light positioning (VLP) has attracted much attention in both academic and industrial areas due to the extensive deployment of light-emitting diodes (LEDs) as next-generation green lighting. Generally, the coverage of a single LED lamp is limited, so LED arrays are usually employed to achieve uniform illumination in large-scale indoor environments. However, in such a dense LED deployment scenario, the superposition of the light signals becomes an important challenge for accurate VLP. To solve this problem, we propose a forward and correctional orthogonal frequency division multiplexing (OFDM)-based VLP (FCO-VLP) scheme with low complexity in signal generation and processing. In the first, forward procedure of FCO-VLP, an initial position is obtained by the trilateration method based on OFDM subcarriers; the positioning accuracy is then further improved in the second, correctional procedure based on a database of reference points. As demonstrated in our experiments, our approach yields an improved average positioning error of 4.65 cm and a 24.2% improvement in positioning accuracy compared with the trilateration method.
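
    A minimal sketch of the forward (trilateration) step is given below, assuming ranges to several LED anchors have already been estimated from the OFDM subcarriers; the anchor coordinates are hypothetical and the correctional database step is not modeled:

```python
import numpy as np

def trilaterate(anchors, dists):
    """Linearized least-squares position estimate from anchor
    coordinates (N x 2) and ranges (N,), N >= 3."""
    x0, y0 = anchors[0]
    d0 = dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Hypothetical LED lamp positions (m) on a ceiling and measured ranges.
leds = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0], [3.0, 3.0]])
receiver = np.array([1.2, 2.1])
ranges = np.linalg.norm(leds - receiver, axis=1)

print("estimated position:", np.round(trilaterate(leds, ranges), 3))
```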

  12. Temporal information processing in short- and long-term memory of patients with schizophrenia.

    PubMed

    Landgraf, Steffen; Steingen, Joerg; Eppert, Yvonne; Niedermeyer, Ulrich; van der Meer, Elke; Krueger, Frank

    2011-01-01

    Cognitive deficits of patients with schizophrenia have been largely recognized as core symptoms of the disorder. One neglected factor that contributes to these deficits is the comprehension of time. In the present study, we assessed temporal information processing and manipulation from short- and long-term memory in 34 patients with chronic schizophrenia and 34 matched healthy controls. On the short-term memory temporal-order reconstruction task, an incidental or intentional learning strategy was deployed. Patients showed worse overall performance than healthy controls. The intentional learning strategy led to dissociable performance improvement in both groups. Whereas healthy controls improved on a performance measure (serial organization), patients improved on an error measure (inappropriate semantic clustering) when using the intentional instead of the incidental learning strategy. On the long-term memory script-generation task, routine and non-routine events of everyday activities (e.g., buying groceries) had to be generated in either chronological or inverted temporal order. Patients were slower than controls at generating events in the chronological routine condition only. They also committed more sequencing and boundary errors in the inverted conditions. The number of irrelevant events was higher in patients in the chronological, non-routine condition. These results suggest that patients with schizophrenia imprecisely access temporal information from short- and long-term memory. In short-term memory, processing of temporal information led to a reduction in errors rather than, as was the case in healthy controls, to an improvement in temporal-order recall. When accessing temporal information from long-term memory, patients were slower and committed more sequencing, boundary, and intrusion errors. Together, these results suggest that time information can be accessed and processed only imprecisely by patients who provide evidence for impaired time comprehension. This could contribute to symptomatic cognitive deficits and strategic inefficiency in schizophrenia.

  13. Autonomous Control Modes and Optimized Path Guidance for Shipboard Landing in High Sea States

    DTIC Science & Technology

    2015-11-16

    a degraded visual environment, workload during the landing task begins to approach the limits of a human pilot's capability. It is a similarly... [Figures: Figure 2, Approach Trajectory, showing the flight path with ±4 ft, ±8 ft, and ±12 ft landing error bands; Figure 5, Open loop system generation, heave and yaw axes, with the same landing error bands.]

  14. [Responsibility due to medication errors in France: a study based on SHAM insurance data].

    PubMed

    Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P

    2015-03-01

    Safe medication practice in hospitals is a major public health concern. The drug supply chain is a complex process and a potential source of errors and harm for the patient. SHAM is the largest French provider of medical liability insurance and a relevant source of data on healthcare complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a conviction by a court. We performed a retrospective study of insurance claims provided by SHAM involving a medication error and leading to a conviction over a 6-year period (between 2005 and 2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. Consequences of the claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were drug monitoring errors (11 cases), administration errors (5 cases), overdoses (6 cases), allergies (4 cases), contraindications (3 cases) and omissions (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court expert were errors related to service organization (11), to medical practice (11) or to nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  15. The high accuracy data processing system of laser interferometry signals based on MSP430

    NASA Astrophysics Data System (ADS)

    Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong

    2009-07-01

    Generally speaking, two orthogonal signals are used in a single-frequency laser interferometer for direction discrimination and electronic subdivision. However, the interferential signals usually carry three errors: a zero-offset error, an unequal-amplitude error and a quadrature phase-shift error. These three errors have a serious impact on subdivision precision. Compensation of the three errors is achieved on the basis of the Heydemann error-compensation algorithm. Because the Heydemann model is computationally demanding, an improved algorithm is proposed that effectively decreases the calculation time by exploiting the fact that only one data item changes in each fitting iteration. A real-time, dynamic compensation circuit is then designed. With the MSP430 microcontroller as the core of the hardware system, the two input signals containing the three errors are digitized by an AD7862. After data processing with the improved algorithm, two ideal, error-free signals are output by an AD7225. At the same time, the two original signals are converted into square waves and fed to the direction-discrimination circuit. The pulses output by the direction-discrimination circuit are counted by the microcontroller's timer. The final result, derived from the pulse count and the software subdivision, is shown on an LED display. The algorithm and the circuit were used to test a laser interferometer with 8-fold optical path difference, and a measuring accuracy of 12-14 nm was achieved.
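
    The correction step can be sketched as below, assuming the ellipse parameters (offsets, amplitude ratio, quadrature phase error) have already been obtained from the fitting algorithm; the signal model and parameter values are assumed for illustration and this is not the authors' code:

```python
import numpy as np

def heydemann_correct(u1, u2, p, q, r, alpha):
    """Correct quadrature interferometer signals for zero offsets (p, q),
    amplitude ratio r = a1/a2 and quadrature phase error alpha (rad),
    assuming u1 = a1*cos(phi) + p and u2 = a2*sin(phi - alpha) + q."""
    x = u1 - p
    y = (r * (u2 - q) + x * np.sin(alpha)) / np.cos(alpha)
    return np.unwrap(np.arctan2(y, x))   # corrected interferometric phase

# Hypothetical distorted signals for a linearly increasing phase.
phi = np.linspace(0, 6 * np.pi, 2000)
a1, a2, p, q, alpha = 1.0, 0.8, 0.05, -0.03, np.deg2rad(3.0)
u1 = a1 * np.cos(phi) + p
u2 = a2 * np.sin(phi - alpha) + q

phi_est = heydemann_correct(u1, u2, p, q, a1 / a2, alpha)
print("max residual phase error (rad):", np.abs(phi_est - phi).max())
```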

  16. The Accuracy of GBM GRB Localizations

    NASA Astrophysics Data System (ADS)

    Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.

    2010-03-01

    We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on a timescale of many minutes or hours using GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors and use the Bayesian analysis to constrain the systematic errors.
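
    A much-simplified, one-dimensional sketch of this kind of analysis is shown below: given hypothetical offsets between GBM and reference locations and their statistical uncertainties, a grid posterior constrains a systematic error added in quadrature (the actual GBM analysis is more elaborate and works on the sky):

```python
import numpy as np

# Hypothetical offsets (deg) between GBM and reference locations,
# and the corresponding statistical 1-sigma uncertainties (deg).
offsets = np.array([2.1, 4.0, 1.2, 6.5, 3.3, 2.8, 5.1, 1.9])
stat_sigma = np.array([1.5, 2.0, 1.0, 3.0, 2.0, 1.8, 2.5, 1.2])

# Grid of candidate systematic errors added in quadrature.
sys_grid = np.linspace(0.0, 10.0, 1001)

def log_likelihood(sys_sigma):
    total_var = stat_sigma**2 + sys_sigma**2
    return np.sum(-0.5 * offsets**2 / total_var - 0.5 * np.log(total_var))

log_post = np.array([log_likelihood(s) for s in sys_grid])  # flat prior
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, sys_grid)

mean_sys = np.trapz(sys_grid * post, sys_grid)
print(f"posterior mean systematic error: {mean_sys:.2f} deg")
```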

  17. Forest statistics for Northwest Florida, 1987

    Treesearch

    Mark J. Brown

    1987-01-01

    The Forest Inventory and Analysis (Forest Survey) Research Work Unit at the Southeastern Forest Experiment Station recently conducted a review of its data processing procedures. During this process, a computer error was discovered which led to inflated estimates of annual removals, net annual growth, and annual mortality for the 1970-1980 remeasurement period in...

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fisk, William J.; Sullivan, Douglas

    This pilot scale study evaluated the counting accuracy of two people-counting systems that could be used in demand controlled ventilation systems to provide control signals for modulating outdoor air ventilation rates. The evaluations included controlled challenges of the people-counting systems using pre-planned movements of occupants through doorways and evaluations of counting accuracies when naive occupants (i.e., occupants unaware of the counting systems) passed through the entrance doors of the building or room. The two people-counting systems had high counting accuracies, with errors typically less than 10 percent, for typical non-demanding counting events. However, counting errors were high in some highly challenging situations, such as multiple people passing simultaneously through a door. Counting errors, for at least one system, can be very high if people stand in the field of view of the sensor. Both counting systems have limitations and would need to be used only at appropriate sites and where the demanding situations that led to counting errors were rare.

  19. Improvement on Timing Accuracy of LIDAR for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Huang, W.; Zhou, X.; Huang, Y.; He, C.; Li, X.; Zhang, L.

    2018-05-01

    The traditional timing discrimination technique for laser rangefinding in remote sensing has limited measurement performance and a relatively large error, and so cannot meet the requirements of high-precision measurement and high-definition lidar imaging. To solve this problem, an improvement in timing accuracy based on improved leading-edge timing discrimination (LED) is proposed. First, the method moves the timing point corresponding to a fixed threshold forward by amplifying the received signal by successive factors. The timing information is then sampled, and the timing points are fitted with algorithms implemented in MATLAB. Finally, the minimum timing error is calculated from the fitted function. In this way the timing error of the received lidar signal is compressed and the lidar data quality is improved. Experiments show that the timing error can be significantly reduced by the multiple amplification of the received signal and the parameter-fitting algorithm, and a timing accuracy of 4.63 ps is achieved.
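
    The idea of the timing-walk compensation can be illustrated with the simulation below (in Python rather than the MATLAB processing described in the abstract); the Gaussian pulse shape, threshold and gain values are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical received lidar pulse: Gaussian leading edge.
t = np.linspace(0, 60e-9, 60001)          # time axis (s)
t0, sigma, A0 = 30e-9, 4e-9, 0.2          # true arrival, width, amplitude (V)
pulse = A0 * np.exp(-0.5 * ((t - t0) / sigma) ** 2)
V_TH = 0.05                                # fixed discrimination threshold (V)

def leading_edge_crossing(signal):
    """First time the signal crosses the threshold (linear interpolation)."""
    i = np.argmax(signal >= V_TH)
    frac = (V_TH - signal[i - 1]) / (signal[i] - signal[i - 1])
    return t[i - 1] + frac * (t[i] - t[i - 1])

gains = np.array([1, 2, 4, 8, 16], float)
crossings = np.array([leading_edge_crossing(g * pulse) for g in gains])

# Model of the walk: t_cross(g) = t0 - sigma*sqrt(2*ln(g*A0/V_TH)).
def walk_model(g, t0_fit, sigma_fit):
    return t0_fit - sigma_fit * np.sqrt(2.0 * np.log(g * A0 / V_TH))

popt, _ = curve_fit(walk_model, gains, crossings, p0=[25e-9, 1e-9])
print(f"recovered arrival time: {popt[0]*1e9:.3f} ns (true {t0*1e9:.1f} ns)")
```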

  20. A comprehensive comparable study of the physiological properties of four microalgal species under different light wavelength conditions.

    PubMed

    Zhong, Yu; Jin, Peng; Cheng, Jay J

    2018-05-19

    Microalgae treated with blue light have potential for the production of human nutrition supplements and biofuel due to their higher biomass productivity and favorable fatty acid composition. Chlorella vulgaris, Chlorella pyrenoidosa, Scenedesmus quadricauda and Scenedesmus obliquus are representative green microalgae widely reported for algal production. In this study, we provide a systematic investigation of the biomass productivity, photosynthetic pigments, chlorophyll fluorescence and fatty acid content of the four green microalgae. The strains were grown under two primary monochromatic light wavelengths [red and blue LEDs (light-emitting diodes)] and under white LED conditions, respectively. Among them, blue LED light was determined to be the best for growth rate, followed by red LED and white LED. Chlorophyll generation was more sensitive to monochromatic blue light. Polyunsaturated fatty acids (PUFAs) such as α-linolenic acid (18:3), which are well suited for human nutritional supplementation, showed high concentrations in these algal strains under blue LED. Collectively, the results indicate that blue LEDs are suitable for various food, feed, and algal biofuel productions due to both biomass and fatty acid productivity.

  1. Simplified spectrophotometric method for the detection of red blood cell agglutination.

    PubMed

    Ramasubramanian, Melur; Anthony, Steven; Lambert, Jeremy

    2008-08-01

    Human error is the most significant factor attributed to incompatible blood transfusions. A spectrophotometric approach to blood typing has been developed by examining the spectral slopes of dilute red blood cell (RBC) suspensions in saline, in the presence and absence of various antibodies, offering a technique for the quantitative determination of agglutination intensity [Transfusion 39, 1051 (1999), doi:10.1046/j.1537-2995.1999.39101051.x]. We offer direct theoretical prediction of the observed change in slope in the 660-1000 nm range through the use of the T-matrix approach and Lorenz-Mie theory for light scattering by dilute RBC suspensions. Following a numerical simulation using the T-matrix code, we present a simplified sensing method for detecting agglutination. The sensor design has been prototyped, fully characterized, and evaluated through a complete set of tests with over 60 RBC samples and compared with the full spectrophotometric method. The LED and photodiode pairs are found to successfully reproduce the spectroscopic determination of red blood cell agglutination.

  2. Recent insights into the Smith-Lemli-Opitz syndrome.

    PubMed

    Yu, H; Patel, S B

    2005-11-01

    The Smith-Lemli-Opitz syndrome (SLOS) is an autosomal recessive multiple congenital anomaly/mental retardation disorder caused by an inborn error of post-squalene cholesterol biosynthesis. Deficient cholesterol synthesis in SLOS is caused by inherited mutations of the 3beta-hydroxysterol-Delta7 reductase gene (DHCR7). DHCR7 deficiency impairs both cholesterol and desmosterol production, resulting in elevated 7DHC/8DHC levels, typically decreased cholesterol levels and, importantly, developmental dysmorphology. The discovery of SLOS has led to new questions regarding the role of the cholesterol biosynthesis pathway in human development. To date, a total of 121 different mutations have been identified in over 250 patients with SLOS, who represent a continuum of clinical severity. Two genetic mouse models have been generated which recapitulate some of the developmental abnormalities of SLOS and have been useful in elucidating the pathogenesis. This mini review summarizes recent insights into SLOS genetics, pathophysiology and potential therapeutic approaches for the treatment of SLOS.

  3. By the sound of it. An ERP investigation of human action sound processing in 7-month-old infants

    PubMed Central

    Geangu, Elena; Quadrelli, Ermanno; Lewis, James W.; Macchi Cassia, Viola; Turati, Chiara

    2015-01-01

    Recent evidence suggests that human adults perceive human action sounds as a distinct category from human vocalizations, environmental, and mechanical sounds, activating different neural networks (Engel et al., 2009; Lewis et al., 2011). Yet, little is known about the development of such specialization. Using event-related potentials (ERP), this study investigated neural correlates of 7-month-olds’ processing of human action (HA) sounds in comparison to human vocalizations (HV), environmental (ENV), and mechanical (MEC) sounds. Relative to the other categories, HA sounds led to increased positive amplitudes between 470 and 570 ms post-stimulus onset at left anterior temporal locations, while HV led to increased negative amplitudes at the more posterior temporal locations in both hemispheres. Collectively, human produced sounds (HA + HV) led to significantly different response profiles compared to non-living sound sources (ENV + MEC) at parietal and frontal locations in both hemispheres. Overall, by 7 months of age human action sounds are being differentially processed in the brain, consistent with a dichotomy for processing living versus non-living things. This provides novel evidence regarding the typical categorical processing of socially relevant sounds. PMID:25732377

  4. THERP and HEART integrated methodology for human error assessment

    NASA Astrophysics Data System (ADS)

    Castiglia, Francesco; Giardina, Mariarosa; Tomarchio, Elio

    2015-11-01

    An integrated THERP and HEART methodology is proposed to investigate accident scenarios that involve operator errors during high-dose-rate (HDR) treatments. The new approach has been modified on the basis of fuzzy set concepts with the aim of prioritizing an exhaustive list of erroneous tasks that can lead to patient radiological overexposures. The results allow for the identification of the human errors that must be understood to gain a better insight into health hazards in the radiotherapy treatment process, so that it can be properly monitored and appropriately managed.
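
    For orientation, the sketch below shows the basic HEART-style human error probability calculation that such an integrated method builds on; the generic task unreliability, error-producing conditions and proportions of affect are placeholders, and the fuzzy extension described in the paper is not modeled:

```python
# Illustrative HEART-style human error probability (HEP) calculation.
# The generic task unreliability and EPC values below are placeholders,
# not the ones used in the Castiglia, Giardina and Tomarchio study.

GENERIC_TASK_UNRELIABILITY = 0.003   # assumed nominal HEP for the task type

# (error-producing condition multiplier, assessed proportion of affect 0..1)
epcs = [
    (11.0, 0.4),   # e.g. shortage of time
    (3.0, 0.2),    # e.g. poor feedback from the system
    (1.6, 0.5),    # e.g. operator inexperience
]

def heart_hep(nominal, epcs):
    """HEP = nominal * product over EPCs of ((EPC - 1) * APOA + 1)."""
    hep = nominal
    for epc, apoa in epcs:
        hep *= (epc - 1.0) * apoa + 1.0
    return min(hep, 1.0)

print(f"assessed HEP: {heart_hep(GENERIC_TASK_UNRELIABILITY, epcs):.4f}")
```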

  5. Patient safety in otolaryngology: a descriptive review.

    PubMed

    Danino, Julian; Muzaffar, Jameel; Metcalfe, Chris; Coulson, Chris

    2017-03-01

    Human evaluation and judgement may include errors that can have disastrous results. Within medicine and healthcare there has been slow progress towards major changes in safety. Healthcare lags behind other specialised industries, such as aviation and nuclear power, where there have been significant improvements in overall safety, especially in reducing the risk of errors. Following several high profile cases in the USA during the 1990s, a report titled "To Err Is Human: Building a Safer Health System" was published. The report extrapolated that in the USA approximately 50,000 to 100,000 patients may die each year as a result of medical errors. Traditionally, otolaryngology has always been regarded as a "safe specialty". A study in the USA in 2004 inferred that there may be 2600 cases of major morbidity and 165 deaths within the specialty. MEDLINE was searched via the PubMed interface for English language articles published between 2000 and 2012, each search combining two or three of the keywords noted earlier. The limitations relate to several generic topics within patient safety in otolaryngology; other areas covered are currently relevant topics driven by recent interest or new advances in technology. There has been a heightened awareness of patient safety within the healthcare community; it has become a major priority. Focus has shifted from apportioning blame to preventing errors and implementing patient safety mechanisms in healthcare delivery. Errors can be divided into errors of action and errors of knowledge or planning. In healthcare there are several factors that may influence adverse events and patient safety. Although technology may improve patient safety, it also introduces new sources of error. The ability to work well with other people provides an additional safety net, and team working has been shown to have a beneficial effect on patient safety. Any field of work involving human decision-making will always carry a risk of error. Within otolaryngology, although patient safety has evolved along similar themes to other surgical specialties, there are several specific high-risk areas. Medical error is a common problem and its human cost is of immense importance. Steps to reduce such errors require the identification of high-risk practice within a complex healthcare system. The commitment to patient safety and quality improvement in medicine depends on personal responsibility and professional accountability.

  6. Error identification and recovery by student nurses using human patient simulation: opportunity to improve patient safety.

    PubMed

    Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L

    2010-02-01

    This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.

  7. Advanced automated glass cockpit certification: Being wary of human factors

    NASA Technical Reports Server (NTRS)

    Amalberti, Rene; Wilbaux, Florence

    1994-01-01

    This paper presents some facets of the French experience with human factors in the process of certification of advanced automated cockpits. Three types of difficulties are described: first, the difficulties concerning the hotly debated concept of human error and its non-linear relationship to risk of accident; a typology of errors to be taken into account in the certification process is put forward to respond to this issue. Next, the difficulties connected to the basically gradual and evolving nature of pilot expertise on a given type of aircraft, which contrasts with the immediate and definitive style of certifying systems. The last difficulties to be considered are those related to the goals of certification itself on these new aircraft and the status of findings from human factor analyses (in particular, what should be done with disappointing results, how much can the changes induced by human factors investigation economically affect aircraft design, how many errors do we need to accumulate before we revise the system, what should be remedied when human factor problems are discovered at the certification stage: the machine? pilot training? the rules? or everything?). The growth of advanced-automated glass cockpits has forced the international aeronautical community to pay more attention to human factors during the design phase, the certification phase and pilot training. The recent creation of a human factor desk at the DGAC-SFACT (Official French services) is a direct consequence of this. The paper is divided into three parts. Part one debates human error and its relationship with system design and accident risk. Part two describes difficulties connected to the basically gradual and evolving nature of pilot expertise on a given type of aircraft, which contrasts with the immediate and definitive style of certifying systems. Part three focuses on concrete outcomes of human factors for certification purposes.

  8. Normal accidents: human error and medical equipment design.

    PubMed

    Dain, Steven

    2002-01-01

    High-risk systems, which are typical of our technologically complex era, include not just nuclear power plants but also hospitals, anesthesia systems, and the practice of medicine and perfusion. In high-risk systems, no matter how effective safety devices are, some types of accidents are inevitable because the system's complexity leads to multiple and unexpected interactions. It is important for healthcare providers to apply a risk assessment and management process to decisions involving new equipment and procedures or staffing matters in order to minimize the residual risks of latent errors, which are amenable to correction because of the large window of opportunity for their detection. This article provides an introduction to basic risk management and error theory principles and examines ways in which they can be applied to reduce and mitigate the inevitable human errors that accompany high-risk systems. The article also discusses "human factor engineering" (HFE), the process which is used to design equipment/ human interfaces in order to mitigate design errors. The HFE process involves interaction between designers and endusers to produce a series of continuous refinements that are incorporated into the final product. The article also examines common design problems encountered in the operating room that may predispose operators to commit errors resulting in harm to the patient. While recognizing that errors and accidents are unavoidable, organizations that function within a high-risk system must adopt a "safety culture" that anticipates problems and acts aggressively through an anonymous, "blameless" reporting mechanism to resolve them. We must continuously examine and improve the design of equipment and procedures, personnel, supplies and materials, and the environment in which we work to reduce error and minimize its effects. Healthcare providers must take a leading role in the day-to-day management of the "Perioperative System" and be a role model in promoting a culture of safety in their organizations.

  9. Using Healthcare Failure Mode and Effect Analysis to reduce medication errors in the process of drug prescription, validation and dispensing in hospitalised patients.

    PubMed

    Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa

    2013-01-01

    To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
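
    The error-rate definition used in the study, and the relative risk reduction derived from it, amount to the small calculation sketched below; the before-and-after counts are invented for illustration:

```python
def error_rate(errors, opportunities):
    """Errors per 100 opportunities for error, as defined in the study."""
    return 100.0 * errors / opportunities

def relative_risk_reduction(rate_before, rate_after):
    """RRR = (rate_before - rate_after) / rate_before."""
    return (rate_before - rate_after) / rate_before

# Hypothetical counts, for illustration only.
rate_before = error_rate(errors=180, opportunities=6000)   # 3.00 %
rate_after = error_rate(errors=140, opportunities=6000)    # 2.33 %

rrr = relative_risk_reduction(rate_before, rate_after)
print(f"prescription error rate: {rate_before:.2f}% -> {rate_after:.2f}%")
print(f"relative risk reduction: {100 * rrr:.1f}%")
```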

  10. Note: An online testing method for lifetime projection of high power light-emitting diode under accelerated reliability test.

    PubMed

    Chen, Qi; Chen, Quan; Luo, Xiaobing

    2014-09-01

    In recent years, due to the fast development of high-power light-emitting diodes (LEDs), their lifetime prediction and assessment have become a crucial issue. Although in situ measurement has been widely used for reliability testing in the laser diode community, it has not been commonly applied in the LED community. In this paper, an online testing method for LED life projection under accelerated reliability testing is proposed and a prototype was built. Optical parametric data were collected. The systematic error and the measurement uncertainty were calculated to be within 0.2% and within 2%, respectively. With this online testing method, experimental data can be acquired continuously and a sufficient amount of data can be gathered. Thus, the projection fitting accuracy can be improved (r² = 0.954) and the testing duration can be shortened.
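
    A sketch of the kind of projection fitting such online data enables is given below, using a simple exponential lumen-maintenance model fitted to hypothetical measurements and extrapolated to an L70 lifetime; it is not the authors' data or procedure:

```python
import numpy as np

# Hypothetical normalized luminous-flux maintenance data from an
# accelerated test (hours, fraction of initial flux) -- illustration only.
hours = np.array([0, 500, 1000, 1500, 2000, 2500, 3000], float)
flux = np.array([1.000, 0.985, 0.972, 0.955, 0.942, 0.930, 0.915])

# Fit L(t) = B * exp(-alpha * t) by linear regression on ln L.
slope, intercept = np.polyfit(hours, np.log(flux), 1)
alpha, B = -slope, np.exp(intercept)

# Goodness of fit (r^2 in log space) and projected L70 lifetime.
pred = intercept + slope * hours
ss_res = np.sum((np.log(flux) - pred) ** 2)
ss_tot = np.sum((np.log(flux) - np.log(flux).mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
L70 = np.log(B / 0.70) / alpha   # time at which flux falls to 70 %

print(f"alpha = {alpha:.3e}/h, r^2 = {r2:.3f}, projected L70 ~ {L70:.0f} h")
```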

  11. The introduction of an acute physiological support service for surgical patients is an effective error reduction strategy.

    PubMed

    Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C

    2013-01-01

    Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence of error and the impact of error on these patients. A number of error taxonomies were used to understand the causes of human error and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013 a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males. There were 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1) and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST. These included institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8) and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped us understand the common sources of error in the general surgical wards and will inform ongoing error reduction initiatives. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  12. Insufficient Hartree–Fock Exchange in Hybrid DFT Functionals Produces Bent Alkynyl Radical Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyeyemi, Victor B.; Keith, John A.; Pavone, Michele

    2012-01-11

    Density functional theory (DFT) is often used to determine the electronic and geometric structures of molecules. While studying alkynyl radicals, we discovered that DFT exchange-correlation (XC) functionals containing less than ~22% Hartree–Fock (HF) exchange led to qualitatively different structures than those predicted from ab initio HF and post-HF calculations or DFT XCs containing 25% or more HF exchange. We attribute this discrepancy to rehybridization at the radical center due to electron delocalization across the triple bonds of the alkynyl groups, which itself is an artifact of self-interaction and delocalization errors. Inclusion of sufficient exact exchange reduces these errors and suppresses this erroneous delocalization; we find that a threshold amount is needed for accurate structure determinations. Finally, below this threshold, significant errors in predicted alkyne thermochemistry emerge as a consequence.

  13. Overview of the TOPEX/Poseidon Platform Harvest Verification Experiment

    NASA Technical Reports Server (NTRS)

    Morris, Charles S.; DiNardo, Steven J.; Christensen, Edward J.

    1995-01-01

    An overview is given of the in situ measurement system installed on Texaco's Platform Harvest for verification of the sea level measurement from the TOPEX/Poseidon satellite. The prelaunch error budget suggested that the total root mean square (RMS) error due to measurements made at this verification site would be less than 4 cm. The actual error budget for the verification site is within these original specifications. However, evaluation of the sea level data from three measurement systems at the platform has resulted in unexpectedly large differences between the systems. Comparison of the sea level measurements from the different tide gauge systems has led to a better understanding of the problems of measuring sea level in relatively deep ocean. As of May 1994, the Platform Harvest verification site has successfully supported 60 TOPEX/Poseidon overflights.

  14. A framework for human-hydrologic system model development integrating hydrology and water management: application to the Cutzamala water system in Mexico

    NASA Astrophysics Data System (ADS)

    Wi, S.; Freeman, S.; Brown, C.

    2017-12-01

    This study presents a general approach to developing computational models of human-hydrologic systems in which human modification of hydrologic surface processes is significant or dominant. A river basin system is represented by a network of human-hydrologic response units (HHRUs) identified based on locations where river regulation occurs (e.g., reservoir operation and diversions). Natural and human processes in HHRUs are simulated in a holistic framework that integrates component models representing rainfall-runoff, river routing, reservoir operation, flow diversion and water use processes. We illustrate the approach in a case study of the Cutzamala water system (CWS) in Mexico, a complex inter-basin water transfer system supplying the Mexico City Metropolitan Area (MCMA). The human-hydrologic system model for the CWS (CUTZSIM) is evaluated against streamflow and reservoir storages measured across the CWS and against water supplied to the MCMA. The CUTZSIM improves the representation of hydrology and river-operation interaction and, in so doing, advances evaluation of system-wide water management consequences under altered climatic and demand regimes. The integrated modeling framework enables evaluation and simulation of model errors throughout the river basin, including errors in the representation of the human component processes. Heretofore, model error evaluation, predictive error intervals and the resultant improved understanding have been limited to hydrologic processes. The general framework represents an initial step towards fuller understanding and prediction of the many and varied processes that determine the hydrologic fluxes and state variables in real river basins.
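
    As a minimal illustration of one human component an HHRU couples to the hydrologic models, the sketch below runs a simple monthly reservoir water balance with a demand-driven release; the inflows, demand and capacity are invented and the actual CUTZSIM formulation is certainly richer:

```python
import numpy as np

def simulate_reservoir(inflow, demand, capacity, storage0):
    """Simple monthly water balance: release what is demanded if available,
    spill anything above capacity. Returns storage, release and spill series."""
    storage = storage0
    stored, releases, spills = [], [], []
    for q_in, q_dem in zip(inflow, demand):
        storage += q_in                       # rainfall-runoff inflow
        release = min(q_dem, storage)         # demand-driven release
        storage -= release
        spill = max(0.0, storage - capacity)  # spill above capacity
        storage -= spill
        stored.append(storage)
        releases.append(release)
        spills.append(spill)
    return np.array(stored), np.array(releases), np.array(spills)

# Hypothetical monthly inflows and water-supply demand (hm^3).
inflow = np.array([30, 45, 60, 80, 55, 25, 15, 10, 12, 20, 28, 35], float)
demand = np.full(12, 35.0)
storage, release, spill = simulate_reservoir(inflow, demand,
                                              capacity=200.0, storage0=120.0)
print("end-of-year storage (hm^3):", storage[-1])
print("unmet demand (hm^3):", float(np.sum(demand - release)))
```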

  15. A Foundation for Systems Anthropometry: Lumbar/Pelvic Kinematics

    DTIC Science & Technology

    1983-02-01

    caused by human error in positioning the cursor of the digitizing board and inaccuracy of the digitizer. (Human error is approximately ±.02 ...)

  16. Comment on "Differential sensitivity to human communication in dogs, wolves, and human infants".

    PubMed

    Marshall-Pescini, S; Passalacqua, C; Valsecchi, P; Prato-Previde, E

    2010-07-09

    Topál et al. (Reports, 4 September 2009, p. 1269) showed that dogs, like infants but unlike wolves, make perseverative search errors that can be explained by the use of ostensive cues from the experimenter. We suggest that a simpler learning process, local enhancement, can account for errors made by dogs.

  17. Review of U.S. Army Unmanned Aerial Systems Accident Reports: Analysis of Human Error Contributions

    DTIC Science & Technology

    2018-03-20

    USAARL Report No. 2018-08, Review of U.S. Army Unmanned Aerial Systems Accident Reports: Analysis of Human Error Contributions, by Kathryn A... The success of unmanned aerial systems (UAS) operations relies upon a variety of factors, including, but not limited to

  18. Impact of human error on lumber yield in rough mills

    Treesearch

    Urs Buehlmann; R. Edward Thomas; R. Edward Thomas

    2002-01-01

    Rough sawn, kiln-dried lumber contains characteristics such as knots and bark pockets that are considered by most people to be defects. When using boards to produce furniture components, these defects are removed to produce clear, defect-free parts. Currently, human operators identify and locate the unusable board areas containing defects. Errors in determining a...

  19. The Impact of Incident Disclosure Behaviors on Medical Malpractice Claims.

    PubMed

    Giraldo, Priscila; Sato, Luke; Castells, Xavier

    2017-06-30

    To provide preliminary estimates of the effect of incident disclosure behaviors on medical malpractice claims. We conducted a descriptive analysis of data on medical malpractice claims obtained from the Controlled Risk Insurance Company and Risk Management Foundation of Harvard Medical Institutions (Cambridge, Massachusetts) between 2012 and 2013 (n = 434). The characteristics of disclosure and apology after medical errors were analyzed. Of 434 medical malpractice claims, in 4.6% (n = 20) the medical error had been disclosed to the patient at the time of the error, and 5.9% (n = 26) had been followed by disclosure and apology. The highest numbers of disclosed injuries occurred in 2011 (23.9%; n = 11) and 2012 (34.8%; n = 16). There was no incremental increase during the financial years studied (2012-2013). The mean age of informed patients was 52.96 years, 58.7% of the patients were female, and 52.2% were inpatients. Of the disclosed errors, 26.1% led to an adverse reaction, and 17.4% were fatal. The cause of the disclosed medical error was improper surgical performance in 17.4% (95% confidence interval, 6.4-28.4). Disclosed medical errors were classified as of medium severity in 67.4%. No apology statement was issued in 54.5% of medical errors classified as high severity. At the health-care centers studied, when a claim followed a medical error, providers infrequently disclosed medical errors or apologized to the patient or relatives. Most of the medical errors followed by disclosure and apology were classified as being of high or medium severity. No changes were detected in the volume of lawsuits over time.

  20. Action errors, error management, and learning in organizations.

    PubMed

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  1. Uncorrected refractive errors.

    PubMed

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  2. Intelligent OCR Processing.

    ERIC Educational Resources Information Center

    Sun, Wei; And Others

    1992-01-01

    Identifies types and distributions of errors in text produced by optical character recognition (OCR) and proposes a process using machine learning techniques to recognize and correct errors in OCR texts. Results of experiments indicating that this strategy can reduce human interaction required for error correction are reported. (25 references)…

  3. Immortalization of normal human mammary epithelial cells in two steps by direct targeting of senescence barriers does not require gross genomic alterations

    DOE PAGES

    Garbe, James C.; Vrba, Lukas; Sputova, Klara; ...

    2014-10-29

    Telomerase reactivation and immortalization are critical for human carcinoma progression. However, little is known about the mechanisms controlling this crucial step, due in part to the paucity of experimentally tractable model systems that can examine human epithelial cell immortalization as it might occur in vivo. We achieved efficient non-clonal immortalization of normal human mammary epithelial cells (HMEC) by directly targeting the 2 main senescence barriers encountered by cultured HMEC. The stress-associated stasis barrier was bypassed using shRNA to p16INK4; replicative senescence due to critically shortened telomeres was bypassed in post-stasis HMEC by c-MYC transduction. Thus, 2 pathologically relevant oncogenic agents are sufficient to immortally transform normal HMEC. The resultant non-clonal immortalized lines exhibited normal karyotypes. Most human carcinomas contain genomically unstable cells, with widespread instability first observed in vivo in pre-malignant stages; in vitro, instability is seen as finite cells with critically shortened telomeres approach replicative senescence. Our results support our hypotheses that: (1) telomere-dysfunction induced genomic instability in pre-malignant finite cells may generate the errors required for telomerase reactivation and immortalization, as well as many additional "passenger" errors carried forward into resulting carcinomas; (2) genomic instability during cancer progression is needed to generate errors that overcome tumor suppressive barriers, but not required per se; bypassing the senescence barriers by direct targeting eliminated a need for genomic errors to generate immortalization. Achieving efficient HMEC immortalization, in the absence of "passenger" genomic errors, should facilitate examination of telomerase regulation during human carcinoma progression, and exploration of agents that could prevent immortalization.

  4. Recognizing and managing errors of cognitive underspecification.

    PubMed

    Duthie, Elizabeth A

    2014-03-01

    James Reason describes cognitive underspecification as incomplete communication that creates a knowledge gap. Errors occur when an information mismatch occurs in bridging that gap with a resulting lack of shared mental models during the communication process. There is a paucity of studies in health care examining this cognitive error and the role it plays in patient harm. The goal of the following case analyses is to facilitate accurate recognition, identify how it contributes to patient harm, and suggest appropriate management strategies. Reason's human error theory is applied in case analyses of errors of cognitive underspecification. Sidney Dekker's theory of human incident investigation is applied to event investigation to facilitate identification of this little recognized error. Contributory factors leading to errors of cognitive underspecification include workload demands, interruptions, inexperienced practitioners, and lack of a shared mental model. Detecting errors of cognitive underspecification relies on blame-free listening and timely incident investigation. Strategies for interception include two-way interactive communication, standardization of communication processes, and technological support to ensure timely access to documented clinical information. Although errors of cognitive underspecification arise at the sharp end with the care provider, effective management is dependent upon system redesign that mitigates the latent contributory factors. Cognitive underspecification is ubiquitous whenever communication occurs. Accurate identification is essential if effective system redesign is to occur.

  5. Comprehensive analysis of a medication dosing error related to CPOE.

    PubMed

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  6. 78 FR 17155 - Standards for the Growing, Harvesting, Packing, and Holding of Produce for Human Consumption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    ...The Food and Drug Administration (FDA or we) is correcting the preamble to a proposed rule that published in the Federal Register of January 16, 2013. That proposed rule would establish science-based minimum standards for the safe growing, harvesting, packing, and holding of produce, meaning fruits and vegetables grown for human consumption. FDA proposed these standards as part of our implementation of the FDA Food Safety Modernization Act. The document published with several technical errors, including some errors in cross references, as well as several errors in reference numbers cited throughout the document. This document corrects those errors. We are also placing a corrected copy of the proposed rule in the docket.

  7. [Hyperopic Laser-in-situ-Keratomileusis after trifocal intraocular lens implantation : Aberration-free femto-Laser-in-situ-Keratomileusis treatment after implantation of a diffractive, multifocal, toric intraocular lens-case analysis].

    PubMed

    Hemkeppler, E; Böhm, M; Kohnen, T

    2018-05-29

    A 52-year-old highly myopic female patient received a multifocal, diffractive, toric intraocular lens because of her wish to be independent of eyeglasses. Despite high-quality, extensive preoperative examinations, a hyperopic refractive error remained postoperatively, which led to the patient's dissatisfaction. This error was treated with Laser-in-situ-Keratomileusis (LASIK). After corneal LASIK treatment and implantation of a diffractive toric multifocal intraocular lens, the patient showed a good postoperative visual result without optical phenomena.

  8. Patient identification using a near-infrared laser scanner

    NASA Astrophysics Data System (ADS)

    Manit, Jirapong; Bremer, Christina; Schweikard, Achim; Ernst, Floris

    2017-03-01

    We propose a new biometric approach in which the tissue thickness of a person's forehead is used as a biometric feature. Given that the spatial registration of two 3D laser scans of the same human face usually produces a low error value, the principle of point cloud registration and its error metric can be applied to human classification techniques. However, by only considering the spatial error, it is not possible to reliably verify a person's identity. We propose to use a novel near-infrared laser-based head tracking system to determine an additional feature, the tissue thickness, and include this in the error metric. Using MRI as a ground truth, data from the foreheads of 30 subjects were collected, from which a 4D reference point cloud was created for each subject. The measurements from the near-infrared system were registered with all reference point clouds using the ICP algorithm. Afterwards, the spatial and tissue thickness errors were extracted, forming a 2D feature space. For all subjects, the lowest feature distance resulted from the registration of a measurement with the reference point cloud of the same person. The combined registration error features yielded two clusters in the feature space, one from the same subject and another from the other subjects. When only the tissue thickness error was considered, these clusters were less distinct but still present. These findings could help to raise safety standards for head and neck cancer patients and lay the foundation for a future human identification technique.
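
    The identification step implied by this two-dimensional error feature space can be sketched as a nearest-reference search; the error values and scaling below are invented, and the ICP registration itself is not implemented here:

```python
import numpy as np

# Hypothetical combined registration-error features after ICP against each
# subject's 4D reference cloud: (spatial RMS error in mm, tissue-thickness
# RMS error in mm). Row i = result of registering the probe scan to reference i.
features = np.array([
    [0.6, 0.4],   # subject 0
    [2.3, 1.9],   # subject 1
    [1.8, 2.5],   # subject 2
    [2.9, 1.1],   # subject 3
])

# Assumed per-dimension scales to make the two errors comparable.
scales = np.array([1.0, 1.0])

def identify(features, scales):
    """Return the index of the reference with the smallest combined error."""
    distances = np.linalg.norm(features / scales, axis=1)
    return int(np.argmin(distances)), distances

idx, dists = identify(features, scales)
print("best-matching reference: subject", idx)
print("combined error distances:", np.round(dists, 2))
```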

  9. Human factors evaluation of remote afterloading brachytherapy. Volume 2, Function and task analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callan, J.R.; Gwynne, J.W. III; Kelly, T.T.

    1995-05-01

    A human factors project on the use of nuclear by-product material to treat cancer using remotely operated afterloaders was undertaken by the Nuclear Regulatory Commission. The purpose of the project was to identify factors that contribute to human error in the system for remote afterloading brachytherapy (RAB). This report documents the findings from the first phase of the project, which involved an extensive function and task analysis of RAB. This analysis identified the functions and tasks in RAB, made preliminary estimates of the likelihood of human error in each task, and determined the skills needed to perform each RAB task. The findings of the function and task analysis served as the foundation for the remainder of the project, which evaluated four major aspects of the RAB system linked to human error: human-system interfaces; procedures and practices; training and qualifications of RAB staff; and organizational practices and policies. At its completion, the project identified and prioritized areas for recommended NRC and industry attention based on all of the evaluations and analyses.

  10. Prediction error induced motor contagions in human behaviors.

    PubMed

    Ikegami, Tsuyoshi; Ganesh, Gowrishankar; Takeuchi, Tatsuya; Nakamoto, Hiroki

    2018-05-29

    Motor contagions refer to implicit effects on one's actions induced by observed actions. Motor contagions are believed to be induced simply by action observation and cause an observer's action to become similar to the action observed. In contrast, here we report a new motor contagion that is induced only when the observation is accompanied by prediction errors - differences between actions one observes and those he/she predicts or expects. In two experiments, one on whole-body baseball pitching and another on simple arm reaching, we show that the observation of the same action induces distinct motor contagions, depending on whether prediction errors are present or not. In the absence of prediction errors, as in previous reports, participants' actions changed to become similar to the observed action, while in the presence of prediction errors, their actions changed to diverge away from it, suggesting distinct effects of action observation and action prediction on human actions. © 2018, Ikegami et al.

  11. Light emitting diode excitation emission matrix fluorescence spectroscopy.

    PubMed

    Hart, Sean J; JiJi, Renée D

    2002-12-01

    An excitation emission matrix (EEM) fluorescence instrument has been developed using a linear array of light emitting diodes (LED). The wavelengths covered extend from the upper UV through the visible spectrum: 370-640 nm. Using an LED array to excite fluorescence emission at multiple excitation wavelengths is a low-cost alternative to an expensive high power lamp and imaging spectrograph. The LED-EEM system is a departure from other EEM spectroscopy systems in that LEDs often have broad excitation ranges which may overlap with neighboring channels. The LED array can be considered a hybrid between a spectroscopic and sensor system, as the broad LED excitation range produces a partially selective optical measurement. The instrument has been tested and characterized using fluorescent dyes: limits of detection (LOD) for 9,10-bis(phenylethynyl)-anthracene and rhodamine B were in the mid parts-per-trillion range; detection limits for the other compounds were in the low parts-per-billion range (< 5 ppb). The LED-EEMs were analyzed using parallel factor analysis (PARAFAC), which allowed the mathematical resolution of the individual contributions of the mono- and dianion fluorescein tautomers a priori. Correct identification and quantitation of six fluorescent dyes in two to six component mixtures (concentrations between 12.5 and 500 ppb) has been achieved with root mean squared errors of prediction (RMSEP) of less than 4.0 ppb for all components.
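
    The figures of merit quoted above, limits of detection (LOD) and root mean squared error of prediction (RMSEP), follow standard chemometric definitions. The sketch below shows one common way to compute them; the 3-sigma/slope LOD convention and all example numbers are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    def limit_of_detection(blank_signals, slope):
        """Common 3-sigma definition: LOD = 3 * std(blank) / calibration slope."""
        return 3.0 * np.std(blank_signals, ddof=1) / slope

    def rmsep(predicted, true):
        """Root mean squared error of prediction over a validation set."""
        predicted, true = np.asarray(predicted), np.asarray(true)
        return float(np.sqrt(np.mean((predicted - true) ** 2)))

    # Hypothetical numbers for illustration only.
    blank = [0.012, 0.010, 0.011, 0.013, 0.009]   # blank fluorescence readings
    slope = 0.045                                 # signal per ppb from a calibration curve
    print(f"LOD   ~ {limit_of_detection(blank, slope):.2f} ppb")
    print(f"RMSEP = {rmsep([24.0, 98.5, 210.0], [25.0, 100.0, 205.0]):.2f} ppb")
    ```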

  12. Organizational Context of Human Factors

    DTIC Science & Technology

    1982-11-01

    anthropometric characteristics of humans (reach, strength, etc.); biological limits of vision, hearing, and memory; and workload issues. This paper is in...developments of the organizational analysis field was the generalization that authoritarian structures led to low morale among personnel, and this led to...emphasized participation in decision making, and freedom to criticize practices, would lead to high morale and more output. Embedded in this

  13. Thermal, optical, and electrical engineering of an innovative tunable white LED light engine

    NASA Astrophysics Data System (ADS)

    Trivellin, Nicola; Meneghini, Matteo; Ferretti, Marco; Barbisan, Diego; Dal Lago, Matteo; Meneghesso, Gaudenzio; Zanoni, Enrico

    2014-02-01

    The color temperature, intensity, and blue spectral content of light affect the ganglion photoreceptors of the human eye and thereby stimulate the human nervous system. In this work we review different methods for obtaining tunable light emission spectra and propose an innovative white LED lighting system. Through an in-depth study of the thermal, electrical, and optical characteristics of GaN- and GaP-based compound semiconductors for optoelectronics, a specific tunable spectrum has been designed. The proposed tunable white LED system is able to achieve a high CRI (above 95) over a large CCT range (3000-5000 K).

  14. Light-emitting diode technology status and directions: Opportunities for horticultural lighting

    DOE PAGES

    Tsao, Jeffrey Y.; Pattison, P. Morgan; Krames, Michael R.

    2016-01-01

    Here, light-emitting diode (LED) technology has advanced rapidly over the last decade, primarily driven by display and general illumination applications ("solid-state lighting (SSL) for humans"). These advancements have made LED lighting technically and economically advantageous not only for these applications, but also, as an indirect benefit, for adjacent applications such as horticultural lighting ("SSL for plants"). Moreover, LED technology has much room for continued improvement. In the near-term, these improvements will continue to be driven by SSL for humans (with indirect benefit to SSL for plants), the most important of which can be anticipated.

  15. Synchronizing movements with the metronome: nonlinear error correction and unstable periodic orbits.

    PubMed

    Engbert, Ralf; Krampe, Ralf Th; Kurths, Jürgen; Kliegl, Reinhold

    2002-02-01

    The control of human hand movements is investigated in a simple synchronization task. We propose and analyze a stochastic model based on nonlinear error correction, a mechanism that implies the existence of unstable periodic orbits. This prediction is tested in an experiment with human subjects. We find that our experimental data are in good agreement with numerical simulations of our theoretical model. These results suggest that feedback control of the human motor system shows nonlinear behavior. Copyright 2001 Elsevier Science (USA).
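
    A minimal sketch of a generic nonlinear error-correction model for synchronizing taps with a metronome, in the spirit of the model described above: each asynchrony is reduced by a saturating (tanh) correction plus motor noise. The correction function, parameter values, and noise level are illustrative assumptions, not the authors' fitted model.

    ```python
    import numpy as np

    def simulate_tapping(n_taps=500, alpha=0.6, beta=8.0,
                         motor_noise=0.01, seed=1):
        """Simulate asynchronies e_n between taps and metronome beats.

        Asynchronies are measured relative to the beats, so the nominal
        metronome period drops out of the update.  Each asynchrony is
        reduced by a saturating nonlinear correction plus motor noise:
            e_{n+1} = e_n - (alpha / beta) * tanh(beta * e_n) + noise
        """
        rng = np.random.default_rng(seed)
        e = np.zeros(n_taps)
        for n in range(n_taps - 1):
            correction = (alpha / beta) * np.tanh(beta * e[n])
            e[n + 1] = e[n] - correction + rng.normal(0.0, motor_noise)
        return e

    asynchronies = simulate_tapping()
    print("mean asynchrony (s):", round(float(asynchronies.mean()), 4))
    print("sd of asynchrony (s):", round(float(asynchronies.std()), 4))
    ```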

  16. Trauma center maturity measured by an analysis of preventable and potentially preventable deaths: there is always something to be learned….

    PubMed

    Matsumoto, Shokei; Jung, Kyoungwon; Smith, Alan; Coimbra, Raul

    2018-06-23

    To establish the preventable and potentially preventable death rates in a mature trauma center and to identify the causes of death and highlight the lessons learned from these cases. We analyzed data from a Level-1 Trauma Center Registry, collected over a 15-year period. Data on demographics, timing of death, and potential errors were collected. Deaths were judged as preventable (PD), potentially preventable (PPD), or non-preventable (NPD), following a strict external peer-review process. During the 15-year period, there were 874 deaths, 15 (1.7%) and 6 (0.7%) of which were considered PPDs and PDs, respectively. Patients in the PD and PPD groups were not sicker and had less severe head injury than those in the NPD group. The time-death distribution differed according to preventability. We identified 21 errors in the PD and PPD groups, but only 61 (7.3%) errors in the NPD group (n = 853). Errors in judgement accounted for the majority of errors, including 90.5% of those in the PD and PPD groups. Although the numbers of PDs and PPDs were low, denoting the maturity of our trauma center, there are important lessons to be learned about how errors in judgement led to deaths that could have been prevented.

  17. A transparent look at the measurement and application of colour rendering in the use of LED light sources

    NASA Astrophysics Data System (ADS)

    Leuschner, F. W.; Van Der Westhuyzen, J. G. J.

    2014-06-01

    The technology for the measurement of colour rendering and colour quality is not new, but many parameters related to this issue are currently changing. A number of standard methods were developed and are used by different specialty areas of the lighting industry. CIE 13.3 has been the accepted standard implemented by many users and used for many years. Light-emitting Diode (LED) technology moves at a rapid pace and, as this lighting source finds wider acceptance, it appears that traditional colour-rendering measurement methods produce inconsistent results. Practical application of various types of LEDs yielded results that challenged conventional thinking regarding colour measurement of light sources. Recent studies have shown that the anatomy and physiology of the human eye is more complex than formerly accepted. Therefore, the development of updated measurement methodology also forces a fresh look at functioning and colour perception of the human eye, especially with regard to LEDs. This paper includes a short description of the history and need for the measurement of colour rendering. Some of the traditional measurement methods are presented and inadequacies are discussed. The latest discoveries regarding the functioning of the human eye and the perception of colour, especially when LEDs are used as light sources, are discussed. The unique properties of LEDs when used in practical applications such as luminaires are highlighted.

  18. Human error in aviation operations

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Lanber, J. K.; Cooper, G. E.

    1974-01-01

    This report is a brief description of research being undertaken by the National Aeronautics and Space Administration. The project is designed to seek out factors in the aviation system which contribute to human error, and to search for ways of minimizing the potential threat posed by these factors. The philosophy and assumptions underlying the study are discussed, together with an outline of the research plan.

  19. Global Precipitation Measurement (GPM) Ground Validation: Plans and Preparations

    NASA Technical Reports Server (NTRS)

    Schwaller, M.; Bidwell, S.; Durning, F. J.; Smith, E.

    2004-01-01

    The Global Precipitation Measurement (GPM) program is an international partnership led by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM will improve climate, weather, and hydro-meteorological forecasts through more frequent and more accurate measurement of precipitation across the globe. This paper describes the concept, the planning, and the preparations for Ground Validation within the GPM program. Ground Validation (GV) plays an important role in the program by investigating and quantitatively assessing the errors within the satellite retrievals. These quantitative estimates of retrieval errors will assist the scientific community by bounding the errors within their research products. The two fundamental requirements of the GPM Ground Validation program are: (1) error characterization of the precipitation retrievals and (2) continual improvement of the satellite retrieval algorithms. These two driving requirements determine the measurements, instrumentation, and location for ground observations. This paper outlines GV plans for estimating the systematic and random components of retrieval error, for characterizing the spatial and temporal structure of the error, and for algorithm improvement, in which error models are developed and experimentally explored to uncover the physical causes of errors within the retrievals. This paper discusses NASA locations for GV measurements as well as anticipated locations from international GPM partners. NASA's primary locations for validation measurements are an oceanic site at Kwajalein Atoll in the Republic of the Marshall Islands and a continental site in north-central Oklahoma at the U.S. Department of Energy's Atmospheric Radiation Measurement Program site.

  20. Dissociable effects of surprising rewards on learning and memory.

    PubMed

    Rouhani, Nina; Norman, Kenneth A; Niv, Yael

    2018-03-19

    Reward-prediction errors track the extent to which rewards deviate from expectations, and aid in learning. How do such errors in prediction interact with memory for the rewarding episode? Existing findings point to both cooperative and competitive interactions between learning and memory mechanisms. Here, we investigated whether learning about rewards in a high-risk context, with frequent, large prediction errors, would give rise to higher fidelity memory traces for rewarding events than learning in a low-risk context. Experiment 1 showed that recognition was better for items associated with larger absolute prediction errors during reward learning. Larger prediction errors also led to higher rates of learning about rewards. Interestingly we did not find a relationship between learning rate for reward and recognition-memory accuracy for items, suggesting that these two effects of prediction errors were caused by separate underlying mechanisms. In Experiment 2, we replicated these results with a longer task that posed stronger memory demands and allowed for more learning. We also showed improved source and sequence memory for items within the high-risk context. In Experiment 3, we controlled for the difficulty of reward learning in the risk environments, again replicating the previous results. Moreover, this control revealed that the high-risk context enhanced item-recognition memory beyond the effect of prediction errors. In summary, our results show that prediction errors boost both episodic item memory and incremental reward learning, but the two effects are likely mediated by distinct underlying systems. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
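
    For readers unfamiliar with the reward-prediction-error formalism used above, the sketch below shows the standard delta-rule update, in which each trial's prediction error is the difference between the obtained reward and the current expectation; larger absolute errors correspond to more surprising outcomes. The function name, learning rate, and reward sequence are illustrative assumptions, not the study's fitted model.

    ```python
    def delta_rule_updates(rewards, value=0.5, learning_rate=0.3):
        """Track a reward expectation with a delta rule.

        Each trial's prediction error is reward - expectation; the
        expectation is then nudged toward the reward by the learning rate.
        """
        prediction_errors = []
        for reward in rewards:
            pe = reward - value
            prediction_errors.append(pe)
            value += learning_rate * pe   # incremental reward learning
        return value, prediction_errors

    # High-risk context: large swings in reward produce large prediction errors.
    final_value, pes = delta_rule_updates([1.0, 0.0, 1.0, 1.0, 0.0])
    print("final expectation:", round(final_value, 3))
    print("abs prediction errors:", [round(abs(p), 2) for p in pes])
    ```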

  1. [Tracing the map of medication errors outside the hospital environment in the Madrid Community].

    PubMed

    Taravilla-Cerdán, Belén; Larrubia-Muñoz, Olga; de la Corte-García, María; Cruz-Martos, Encarnación

    2011-12-01

    Preparation of a map of medication errors reported by health professionals outside hospitals within the framework of Medication Errors Reporting for the Community of Madrid during the period 2008-2009. Retrospective observational study. Notification database of medication errors in the Community of Madrid. Notifications sent to the web page: Safe Use of Medicines and Health Products of the Community of Madrid. Information on the originator of the report, date of incident, shift, type of error and causes, outcome, patient characteristics, stage, place where it was produced and detected, whether the medication was administered, lot number, expiry date and the general nature of the drug, and a brief description of the incident. There were 5470 medication errors analysed, of which 3412 came from outside hospitals (62%); these occurred mainly in the prescription stage (56.92%) and were reported mostly by pharmacists. No harm was done in 92.9% of cases, there was harm in 4.8%, and in 2.3% there was an error that could not be followed up. The centralization of information has led to the confirmation that the prescription is a vulnerable point in the chain of drug therapy. Cleaning up prescription databases, preventing the marketing of commercial presentations that give rise to confusion, enhancing information to professionals and patients, establishing standardised procedures, and avoiding ambiguous or illegible prescriptions and abbreviations are useful strategies to try to minimise these errors. Copyright © 2010 Elsevier España, S.L. All rights reserved.

  2. Preparations for Global Precipitation Measurement (GPM) Ground Validation

    NASA Technical Reports Server (NTRS)

    Bidwell, S. W.; Bibyk, I. K.; Duming, J. F.; Everett, D. F.; Smith, E. A.; Wolff, D. B.

    2004-01-01

    The Global Precipitation Measurement (GPM) program is an international partnership led by the National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency (JAXA). GPM will improve climate, weather, and hydro-meteorological forecasts through more frequent and more accurate measurement of precipitation across the globe. This paper describes the concept and the preparations for Ground Validation within the GPM program. Ground Validation (GV) plays a critical role in the program by investigating and quantitatively assessing the errors within the satellite retrievals. These quantitative estimates of retrieval errors will assist the scientific community by bounding the errors within their research products. The two fundamental requirements of the GPM Ground Validation program are: (1) error characterization of the precipitation retrievals and (2) continual improvement of the satellite retrieval algorithms. These two driving requirements determine the measurements, instrumentation, and location for ground observations. This paper describes GV plans for estimating the systematic and random components of retrieval error and for characterizing the spatial and temporal structure of the error. This paper describes the GPM program for algorithm improvement in which error models are developed and experimentally explored to uncover the physical causes of errors within the retrievals. GPM will ensure that information gained through Ground Validation is applied to future improvements in the spaceborne retrieval algorithms. This paper discusses the potential locations for validation measurement and research, the anticipated contributions of GPM's international partners, and the interaction of Ground Validation with other GPM program elements.

  3. Human Factors Directions for Civil Aviation

    NASA Technical Reports Server (NTRS)

    Hart, Sandra G.

    2002-01-01

    Despite considerable progress in understanding human capabilities and limitations, incorporating human factors into aircraft design, operation, and certification, and the emergence of new technologies designed to reduce workload and enhance human performance in the system, most aviation accidents still involve human errors. Such errors occur as a direct or indirect result of untimely, inappropriate, or erroneous actions (or inactions) by apparently well-trained and experienced pilots, controllers, and maintainers. The field of human factors has solved many of the more tractable problems related to simple ergonomics, cockpit layout, symbology, and so on. We have learned much about the relationships between people and machines, but know less about how to form successful partnerships between humans and the information technologies that are beginning to play a central role in aviation. Significant changes envisioned in the structure of the airspace, pilots and controllers' roles and responsibilities, and air/ground technologies will require a similarly significant investment in human factors during the next few decades to ensure the effective integration of pilots, controllers, dispatchers, and maintainers into the new system. Many of the topics that will be addressed are not new because progress in crucial areas, such as eliminating human error, has been slow. A multidisciplinary approach that capitalizes upon human studies and new classes of information, computational models, intelligent analytical tools, and close collaborations with organizations that build, operate, and regulate aviation technology will ensure that the field of human factors meets the challenge.

  4. Erratum: Raman linewidths and rotationally inelastic collision rates in nitrogen [J. Chem. Phys. 98, 257 (1993)]

    NASA Astrophysics Data System (ADS)

    Green, Sheldon

    1993-09-01

    A computer program error led to erroneous results in the titled paper. Corrected generalized IOS cross sections are significantly changed, especially at lower collision energies. These changes tend to cancel in predicted Raman linewidths; there is a systematic increase of 10-15%, changing quantitative, but not qualitative, comparisons with experimental data.

  5. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  6. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  7. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  8. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...

  9. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...

  10. 42 CFR 407.32 - Prejudice to enrollment rights because of Federal Government misrepresentation, inaction, or error.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Government misrepresentation, inaction, or error. 407.32 Section 407.32 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM SUPPLEMENTARY MEDICAL INSURANCE... enrollment rights because of Federal Government misrepresentation, inaction, or error. If an individual's...

  11. 42 CFR 407.32 - Prejudice to enrollment rights because of Federal Government misrepresentation, inaction, or error.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Government misrepresentation, inaction, or error. 407.32 Section 407.32 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES MEDICARE PROGRAM SUPPLEMENTARY MEDICAL INSURANCE... enrollment rights because of Federal Government misrepresentation, inaction, or error. If an individual's...

  12. Can eye-tracking technology improve situational awareness in paramedic clinical education?

    PubMed

    Williams, Brett; Quested, Andrew; Cooper, Simon

    2013-01-01

    Human factors play a significant part in clinical error. Situational awareness (SA) means being aware of one's surroundings, comprehending the present situation, and being able to predict outcomes. It is a key human skill that, when properly applied, is associated with reducing medical error: eye-tracking technology can be used to provide an objective and qualitative measure of the initial perception component of SA. Feedback from eye-tracking technology can be used to improve the understanding and teaching of SA in clinical contexts, and consequently, has potential for reducing clinician error and the concomitant adverse events.

  13. Accommodation: The role of the external muscles of the eye: A consideration of refractive errors in relation to extraocular malfunction.

    PubMed

    Hargrave, B K

    2014-11-01

    Speculation as to optical malfunction has led to dissatisfaction with the theory that the lens is the sole agent in accommodation and to the suggestion that other parts of the eye are also conjointly involved. Around half a century ago, Robert Brooks Simpkins suggested that the mechanical features of the human eye were precisely such as to allow for a lengthening of the globe when the eye accommodated. Simpkins was not an optical man but his theory is both imaginative and comprehensive and deserves consideration. It is submitted here that accommodation is in fact a twofold process and that, although involving the lens, it is achieved primarily by means of a give-and-take interplay between adducting and abducting external muscles, whereby an elongation of the eyeball is brought about by a stretching of the delicate elastic fibres immediately behind the cornea. The three muscles responsible for convergence (superior, internal and inferior recti) all pull from in front backwards, while of the three abductors (external rectus and the two obliques) the obliques pull from behind forwards, allowing for an easy elongation as the eye turns inwards and a return to its original length as the abducting muscles regain their former tension, returning the eye to distance vision. In refractive errors, the altered length of the eyeball disturbs the harmonious give-and-take relationship between adductors and abductors. Such stresses are likely to be perpetuated and the error exacerbated. Speculation is not directed towards a search for a possible cause of the muscular imbalance, since none is suspected. Muscles not used rapidly lose tone, as evidenced after removal of a limb from plaster. Early attention to the need for restorative exercise is essential and the results are usually impressive. If flexibility of the external muscles of the eyes is essential for continuing good sight, presbyopia can be avoided and with it the supposed necessity of glasses in middle life. Early attention to the need for muscle flexibility and for frequent change of focus, it is believed, leads to ocular wellbeing and obviates the reliance on glasses. It is a consideration yet to be widely entertained. The alarming increase in myopia has led to considerable investigation in recent years as to the increase in the length of the eyeball. Thus far, however, there is little agreement regarding causes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Patient safety in the care of mentally ill people in Switzerland: Action plan 2016

    PubMed

    Richard, Aline; Mascherek, Anna C; Schwappach, David L B

    2017-01-01

    Background: Patient safety in mental healthcare has not yet attracted great attention, although the burden and the prevalence of mental diseases are high. The risk of errors with the potential to harm patients, such as aggression against self and others or non-drug treatment errors, is particularly high in this vulnerable group. Aim: To develop priority topics and strategies for action to foster patient safety in mental healthcare. Method: The Swiss patient safety foundation, together with experts, conducted round table discussions and a Delphi questionnaire to define topics along the treatment pathway and to prioritise these topics. Finally, fields of action were developed. Results: An action plan was developed, including the definition and prioritization of 9 topics where errors may occur. A global rating task revealed errors concerning diagnostics and structural errors as most important. This led to the development of 4 fields of action (awareness raising, research, implementation, and education and training), including practice-oriented potential starting points to enhance patient safety. Conclusions: The action plan highlights issues of high concern for patient safety in mental healthcare. It serves as a starting point for the development of strategies for action as well as of concrete activities.

  15. Impacts of visuomotor sequence learning methods on speed and accuracy: Starting over from the beginning or from the point of error.

    PubMed

    Tanaka, Kanji; Watanabe, Katsumi

    2016-02-01

    The present study examined whether sequence learning led to more accurate and shorter performance time if people who are learning a sequence start over from the beginning when they make an error (i.e., practice the whole sequence) or only from the point of error (i.e., practice a part of the sequence). We used a visuomotor sequence learning paradigm with a trial-and-error procedure. In Experiment 1, we found fewer errors, and shorter performance time for those who restarted their performance from the beginning of the sequence as compared to those who restarted from the point at which an error occurred, indicating better learning of spatial and motor representations of the sequence. This might be because the learned elements were repeated when the next performance started over from the beginning. In subsequent experiments, we increased the occasions for the repetitions of learned elements by modulating the number of fresh start points in the sequence after errors. The results showed that fewer fresh start points were likely to lead to fewer errors and shorter performance time, indicating that the repetitions of learned elements enabled participants to develop stronger spatial and motor representations of the sequence. Thus, a single or two fresh start points in the sequence (i.e., starting over only from the beginning or from the beginning or midpoint of the sequence after errors) is likely to lead to more accurate and faster performance. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Model-based cost-effectiveness analysis of interventions aimed at preventing medication error at hospital admission (medicines reconciliation).

    PubMed

    Karnon, Jonathan; Campbell, Fiona; Czoski-Murray, Carolyn

    2009-04-01

    Medication errors can lead to preventable adverse drug events (pADEs) that have significant cost and health implications. Errors often occur at care interfaces, and various interventions have been devised to reduce medication errors at the point of admission to hospital. The aim of this study is to assess the incremental costs and effects [measured as quality adjusted life years (QALYs)] of a range of such interventions for which evidence of effectiveness exists. A previously published medication errors model was adapted to describe the pathway of errors occurring at admission through to the occurrence of pADEs. The baseline model was populated using literature-based values, and then calibrated to observed outputs. Evidence of effects was derived from a systematic review of interventions aimed at preventing medication error at hospital admission. All five interventions, for which evidence of effectiveness was identified, are estimated to be extremely cost-effective when compared with the baseline scenario. The pharmacist-led reconciliation intervention has the highest expected net benefits and a probability of being cost-effective of over 60% at a QALY value of £10,000. The medication errors model provides reasonably strong evidence that some form of intervention to improve medicines reconciliation is a cost-effective use of NHS resources. The variation in the reported effectiveness of the few identified studies of medication error interventions illustrates the need for extreme attention to detail in the development of interventions, but also in their evaluation, and may justify the primary evaluation of more than one specification of included interventions.
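
    The decision metrics referred to above (expected net benefit and the probability of being cost-effective at a given willingness-to-pay per QALY) can be computed from simulated incremental costs and QALYs roughly as sketched below. The distributions, parameter values, and function names are invented for illustration and are not taken from the published model.

    ```python
    import numpy as np

    def net_benefit_summary(delta_qalys, delta_costs, willingness_to_pay=10_000):
        """Expected incremental net monetary benefit and P(cost-effective).

        delta_qalys, delta_costs: arrays of simulated incremental QALYs and
        costs (intervention minus comparator), e.g. from a probabilistic
        sensitivity analysis.
        """
        inb = willingness_to_pay * np.asarray(delta_qalys) - np.asarray(delta_costs)
        return float(inb.mean()), float((inb > 0).mean())

    # Invented simulation draws for a reconciliation-type intervention.
    rng = np.random.default_rng(42)
    dq = rng.normal(0.004, 0.003, 10_000)   # incremental QALYs per patient
    dc = rng.normal(5.0, 20.0, 10_000)      # incremental cost per patient (GBP)
    mean_inb, p_ce = net_benefit_summary(dq, dc)
    print(f"expected net benefit: £{mean_inb:.2f} per patient")
    print(f"probability cost-effective at £10,000/QALY: {p_ce:.0%}")
    ```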

  17. Recognizing the Ordinary as Extraordinary: Insight Into the "Way We Work" to Improve Patient Safety Outcomes.

    PubMed

    Henneman, Elizabeth A

    2017-07-01

    The Institute of Medicine (now National Academy of Medicine) reports "To Err is Human" and "Crossing the Chasm" made explicit 3 previously unappreciated realities: (1) Medical errors are common and result in serious, preventable adverse events; (2) The majority of medical errors are the result of system versus human failures; and (3) It would be impossible for any system to prevent all errors. With these realities, the role of the nurse in the "near miss" process and as the final safety net for the patient is of paramount importance. The nurse's role in patient safety is described from both a systems perspective and a human factors perspective. Critical care nurses use specific strategies to identify, interrupt, and correct medical errors. Strategies to identify errors include knowing the patient, knowing the plan of care, double-checking, and surveillance. Nursing strategies to interrupt errors include offering assistance, clarifying, and verbally interrupting. Nurses correct errors by persevering, being physically present, reviewing/confirming the plan of care, or involving another nurse or physician. Each of these strategies has implications for education, practice, and research. Surveillance is a key nursing strategy for identifying medical errors and reducing adverse events. Eye-tracking technology is a novel approach for evaluating the surveillance process during common, high-risk processes such as blood transfusion and medication administration. Eye tracking has also been used to examine the impact of interruptions to care caused by bedside alarms as well as by other health care personnel. Findings from this safety-related eye-tracking research provide new insight into effective bedside surveillance and interruption management strategies. ©2017 American Association of Critical-Care Nurses.

  18. Preventing medical errors by designing benign failures.

    PubMed

    Grout, John R

    2003-07-01

    One way to successfully reduce medical errors is to design health care systems that are more resistant to the tendencies of human beings to err. One interdisciplinary approach entails creating design changes, mitigating human errors, and making human error irrelevant to outcomes. This approach is intended to facilitate the creation of benign failures, which have been called mistake-proofing devices and forcing functions elsewhere. USING FAULT TREES TO DESIGN FORCING FUNCTIONS: A fault tree is a graphical tool used to understand the relationships that either directly cause or contribute to the cause of a particular failure. A careful analysis of a fault tree enables the analyst to anticipate how the process will behave after the change. EXAMPLE OF AN APPLICATION: A scenario in which a patient is scalded while bathing can serve as an example of how multiple fault trees can be used to design forcing functions. The first fault tree shows the undesirable event--patient scalded while bathing. The second fault tree has a benign event--no water. Adding a scald valve changes the outcome from the undesirable event ("patient scalded while bathing") to the benign event ("no water") Analysis of fault trees does not ensure or guarantee that changes necessary to eliminate error actually occur. Most mistake-proofing is used to prevent simple errors and to create well-defended processes, but complex errors can also result. The utilization of mistake-proofing or forcing functions can be thought of as changing the logic of a process. Errors that formerly caused undesirable failures can be converted into the causes of benign failures. The use of fault trees can provide a variety of insights into the design of forcing functions that will improve patient safety.
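
    A fault tree of this kind can also be expressed programmatically. The sketch below encodes the scald example with simple AND/OR gates and shows how adding the forcing function (a scald valve) converts the undesirable top event into the benign "no water" outcome; the specific basic events and function names are hypothetical.

    ```python
    def or_gate(*events):
        """Logical OR over basic events."""
        return any(events)

    def and_gate(*events):
        """Logical AND over basic events."""
        return all(events)

    def patient_scalded(thermostat_failed, temp_set_too_high,
                        patient_in_bath, scald_valve_fitted):
        """Top event of the first fault tree: the patient is scalded."""
        water_too_hot = or_gate(thermostat_failed, temp_set_too_high)
        # The mistake-proofing device must be absent for the hazard to occur.
        return and_gate(water_too_hot, patient_in_bath, not scald_valve_fitted)

    def no_water(thermostat_failed, temp_set_too_high, scald_valve_fitted):
        """Benign top event of the second fault tree: flow is shut off."""
        water_too_hot = or_gate(thermostat_failed, temp_set_too_high)
        return and_gate(water_too_hot, scald_valve_fitted)

    # Same basic-event states, with and without the forcing function.
    print(patient_scalded(True, False, True, scald_valve_fitted=False))  # True: harm
    print(patient_scalded(True, False, True, scald_valve_fitted=True))   # False
    print(no_water(True, False, scald_valve_fitted=True))                # True: benign failure
    ```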

  19. A two dimensional interface element for coupling of independently modeled three dimensional finite element meshes and extensions to dynamic and non-linear regimes

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad

    1995-01-01

    The work reported here pertains only to the first year of research for a three year proposal period. As a prelude to this two dimensional interface element, the one dimensional element was tested and errors were discovered in the code for built-up structures and curved interfaces. These errors were corrected and the benchmark Boeing composite crown panel was analyzed successfully. A study of various splines led to the conclusion that cubic B-splines best suit this interface element application. A least squares approach combined with cubic B-splines was constructed to make a smooth function from the noisy data obtained with random error in the coordinate data points of the Boeing crown panel analysis. Preliminary investigations for the formulation of discontinuous 2-D shell and 3-D solid elements were conducted.
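
    The least-squares fit of cubic B-splines to noisy coordinate data described above can be illustrated with SciPy's LSQUnivariateSpline, used here as a modern stand-in for the original implementation; the synthetic data, knot placement, and noise level are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    # Synthetic "noisy coordinate data" standing in for the panel measurements.
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 200)
    y_true = np.sin(2 * np.pi * x)
    y_noisy = y_true + rng.normal(0.0, 0.05, x.size)

    # Interior knots for a cubic (k=3) least-squares B-spline fit.
    interior_knots = np.linspace(0.1, 0.9, 9)
    spline = LSQUnivariateSpline(x, y_noisy, interior_knots, k=3)

    print("RMS residual vs noisy data:", np.sqrt(np.mean((spline(x) - y_noisy) ** 2)))
    print("RMS error vs true curve:   ", np.sqrt(np.mean((spline(x) - y_true) ** 2)))
    ```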

  20. Uncorrected refractive errors

    PubMed Central

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), was performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship. PMID:22944755

  1. Learning and retention through predictive inference and classification.

    PubMed

    Sakamoto, Yasuaki; Love, Bradley C

    2010-12-01

    Work in category learning addresses how humans acquire knowledge and, thus, should inform classroom practices. In two experiments, we apply and evaluate intuitions garnered from laboratory-based research in category learning to learning tasks situated in an educational context. In Experiment 1, learning through predictive inference and classification were compared for fifth-grade students using class-related materials. Making inferences about properties of category members and receiving feedback led to the acquisition of both queried (i.e., tested) properties and nonqueried properties that were correlated with a queried property (e.g., even if not queried, students learned about a species' habitat because it correlated with a queried property, like the species' size). In contrast, classifying items according to their species and receiving feedback led to knowledge of only the property most diagnostic of category membership. After multiple-day delay, the fifth-graders who learned through inference selectively retained information about the queried properties, and the fifth-graders who learned through classification retained information about the diagnostic property, indicating a role for explicit evaluation in establishing memories. Overall, inference learning resulted in fewer errors, better retention, and more liking of the categories than did classification learning. Experiment 2 revealed that querying a property only a few times was enough to manifest the full benefits of inference learning in undergraduate students. These results suggest that classroom teaching should emphasize reasoning from the category to multiple properties rather than from a set of properties to the category. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  2. The effects of recall errors and of selection bias in epidemiologic studies of mobile phone use and cancer risk.

    PubMed

    Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth

    2006-07-01

    This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
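
    A toy Monte Carlo sketch of the random, non-differential recall-error scenario: exposure remembered with multiplicative log-normal error attenuates the estimated odds ratio toward 1. The exposure distribution, error magnitude, cohort-style design, and median split below are invented for illustration and are far simpler than the INTERPHONE simulations.

    ```python
    import numpy as np

    def simulate_recall_bias(n=200_000, true_or_high_use=1.5,
                             recall_sd=0.6, seed=7):
        """Compare the odds ratio for 'high use' (above-median exposure)
        estimated from true exposure vs. exposure recalled with random
        multiplicative (log-normal) error."""
        rng = np.random.default_rng(seed)
        true_use = rng.lognormal(mean=0.0, sigma=1.0, size=n)   # hours of use
        high_true = true_use > np.median(true_use)

        # Disease risk depends on *true* high use only.
        base_risk = 0.01
        risk = np.where(high_true, base_risk * true_or_high_use, base_risk)
        case = rng.random(n) < risk

        # Reported use = true use times a random recall error.
        reported = true_use * rng.lognormal(0.0, recall_sd, size=n)
        high_reported = reported > np.median(reported)

        def odds_ratio(exposed):
            a = np.sum(case & exposed); b = np.sum(case & ~exposed)
            c = np.sum(~case & exposed); d = np.sum(~case & ~exposed)
            return (a * d) / (b * c)

        return odds_ratio(high_true), odds_ratio(high_reported)

    or_true, or_reported = simulate_recall_bias()
    print(f"OR using true exposure:     {or_true:.2f}")
    print(f"OR using recalled exposure: {or_reported:.2f}  (attenuated toward 1)")
    ```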

  3. Surgical screw segmentation for mobile C-arm CT devices

    NASA Astrophysics Data System (ADS)

    Görres, Joseph; Brehler, Michael; Franke, Jochen; Wolf, Ivo; Vetter, Sven Y.; Grützner, Paul A.; Meinzer, Hans-Peter; Nabers, Diana

    2014-03-01

    Calcaneal fractures are commonly treated by open reduction and internal fixation. An anatomical reconstruction of involved joints is mandatory to prevent cartilage damage and premature arthritis. In order to avoid intraarticular screw placements, the use of mobile C-arm CT devices is required. However, for analyzing the screw placement in detail, a time-consuming human-computer interaction is necessary to navigate through 3D images and therefore to view a single screw in detail. Established interaction procedures of repeatedly positioning and rotating sectional planes are inconvenient and impede the intraoperative assessment of the screw positioning. To simplify the interaction with 3D images, we propose an automatic screw segmentation that allows for an immediate selection of relevant sectional planes. Our algorithm consists of three major steps. First, cylindrical characteristics are determined from local gradient structures with the help of RANSAC. In a second step, a DBSCAN clustering algorithm is applied to group similar cylinder characteristics. Each detected cluster represents a screw, whose determined location is then refined by a cylinder-to-image registration in a third step. Our evaluation with 309 screws in 50 images shows robust and precise results. The algorithm detected 98% (303) of the screws correctly. Thirteen clusters led to falsely identified screws. The mean distance error was 0.8 ± 0.8 mm for the screw tip and 1.2 ± 1 mm for the screw head. The mean orientation error was 1.4 ± 1.2 degrees.
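
    The clustering step can be sketched with scikit-learn's DBSCAN, grouping cylinder candidates (an axis point plus an axis direction per candidate) into one cluster per screw. The feature encoding, eps, and min_samples values are assumptions; the RANSAC cylinder detection and the cylinder-to-image refinement are not reproduced.

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    # Hypothetical cylinder candidates from a RANSAC stage: each row is
    # (x, y, z of a point on the axis, axis direction dx, dy, dz).
    rng = np.random.default_rng(3)
    screw_a = np.hstack([rng.normal([10, 20, 5], 0.5, (40, 3)),
                         rng.normal([0, 0, 1], 0.02, (40, 3))])
    screw_b = np.hstack([rng.normal([30, 18, 7], 0.5, (35, 3)),
                         rng.normal([1, 0, 0], 0.02, (35, 3))])
    noise = np.hstack([rng.uniform(0, 50, (10, 3)), rng.normal(0, 1, (10, 3))])
    candidates = np.vstack([screw_a, screw_b, noise])

    labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(candidates)
    n_screws = len(set(labels)) - (1 if -1 in labels else 0)
    print("detected screws (clusters):", n_screws)
    for k in range(n_screws):
        axis_point = candidates[labels == k, :3].mean(axis=0)
        print(f"  screw {k}: estimated axis point {np.round(axis_point, 1)}")
    ```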

  4. Distinct prediction errors in mesostriatal circuits of the human brain mediate learning about the values of both states and actions: evidence from high-resolution fMRI.

    PubMed

    Colas, Jaron T; Pauli, Wolfgang M; Larsen, Tobias; Tyszka, J Michael; O'Doherty, John P

    2017-10-01

    Prediction-error signals consistent with formal models of "reinforcement learning" (RL) have repeatedly been found within dopaminergic nuclei of the midbrain and dopaminoceptive areas of the striatum. However, the precise form of the RL algorithms implemented in the human brain is not yet well determined. Here, we created a novel paradigm optimized to dissociate the subtypes of reward-prediction errors that function as the key computational signatures of two distinct classes of RL models-namely, "actor/critic" models and action-value-learning models (e.g., the Q-learning model). The state-value-prediction error (SVPE), which is independent of actions, is a hallmark of the actor/critic architecture, whereas the action-value-prediction error (AVPE) is the distinguishing feature of action-value-learning algorithms. To test for the presence of these prediction-error signals in the brain, we scanned human participants with a high-resolution functional magnetic-resonance imaging (fMRI) protocol optimized to enable measurement of neural activity in the dopaminergic midbrain as well as the striatal areas to which it projects. In keeping with the actor/critic model, the SVPE signal was detected in the substantia nigra. The SVPE was also clearly present in both the ventral striatum and the dorsal striatum. However, alongside these purely state-value-based computations we also found evidence for AVPE signals throughout the striatum. These high-resolution fMRI findings suggest that model-free aspects of reward learning in humans can be explained algorithmically with RL in terms of an actor/critic mechanism operating in parallel with a system for more direct action-value learning.
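
    To make the SVPE/AVPE distinction concrete, here is a minimal sketch of the two update rules on a toy single-state bandit task: the critic's state-value prediction error is action-independent, while the Q-learning action-value prediction error is specific to the chosen action. The task, learning rate, and variable names are illustrative assumptions, not the scanner paradigm or the fitted models.

    ```python
    import numpy as np

    def actor_critic_and_q_updates(rewards, actions, alpha=0.2, n_actions=2):
        """Compute state-value prediction errors (critic) and action-value
        prediction errors (Q-learning) for a single-state bandit task."""
        v = 0.0                      # critic's state value
        q = np.zeros(n_actions)      # action values
        svpe_trace, avpe_trace = [], []
        for a, r in zip(actions, rewards):
            svpe = r - v             # state-value PE (action-independent)
            avpe = r - q[a]          # action-value PE (action-specific)
            v += alpha * svpe
            q[a] += alpha * avpe
            svpe_trace.append(svpe)
            avpe_trace.append(avpe)
        return svpe_trace, avpe_trace, v, q

    rng = np.random.default_rng(0)
    actions = rng.integers(0, 2, 20)
    rewards = np.where(actions == 0, rng.random(20) < 0.8, rng.random(20) < 0.2).astype(float)
    svpe, avpe, v, q = actor_critic_and_q_updates(rewards, actions)
    print("final state value:", round(v, 2), "| final action values:", np.round(q, 2))
    ```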

  5. Distinct prediction errors in mesostriatal circuits of the human brain mediate learning about the values of both states and actions: evidence from high-resolution fMRI

    PubMed Central

    Pauli, Wolfgang M.; Larsen, Tobias; Tyszka, J. Michael; O’Doherty, John P.

    2017-01-01

    Prediction-error signals consistent with formal models of “reinforcement learning” (RL) have repeatedly been found within dopaminergic nuclei of the midbrain and dopaminoceptive areas of the striatum. However, the precise form of the RL algorithms implemented in the human brain is not yet well determined. Here, we created a novel paradigm optimized to dissociate the subtypes of reward-prediction errors that function as the key computational signatures of two distinct classes of RL models—namely, “actor/critic” models and action-value-learning models (e.g., the Q-learning model). The state-value-prediction error (SVPE), which is independent of actions, is a hallmark of the actor/critic architecture, whereas the action-value-prediction error (AVPE) is the distinguishing feature of action-value-learning algorithms. To test for the presence of these prediction-error signals in the brain, we scanned human participants with a high-resolution functional magnetic-resonance imaging (fMRI) protocol optimized to enable measurement of neural activity in the dopaminergic midbrain as well as the striatal areas to which it projects. In keeping with the actor/critic model, the SVPE signal was detected in the substantia nigra. The SVPE was also clearly present in both the ventral striatum and the dorsal striatum. However, alongside these purely state-value-based computations we also found evidence for AVPE signals throughout the striatum. These high-resolution fMRI findings suggest that model-free aspects of reward learning in humans can be explained algorithmically with RL in terms of an actor/critic mechanism operating in parallel with a system for more direct action-value learning. PMID:29049406

  6. Minimizing human error in radiopharmaceutical preparation and administration via a bar code-enhanced nuclear pharmacy management system.

    PubMed

    Hakala, John L; Hung, Joseph C; Mosman, Elton A

    2012-09-01

    The objective of this project was to ensure correct radiopharmaceutical administration through the use of a bar code system that links patient and drug profiles with on-site information management systems. This new combined system would minimize the amount of manual human manipulation, which has proven to be a primary source of error. The most common reason for dosing errors is improper patient identification when a dose is obtained from the nuclear pharmacy or when a dose is administered. A standardized electronic transfer of information from radiopharmaceutical preparation to injection will further reduce the risk of misadministration. Value stream maps showing the flow of the patient dose information, as well as potential points of human error, were developed. Next, a future-state map was created that included proposed corrections for the most common critical sites of error. Transitioning the current process to the future state will require solutions that address these sites. To optimize the future-state process, a bar code system that links the on-site radiology management system with the nuclear pharmacy management system was proposed. A bar-coded wristband connects the patient directly to the electronic information systems. The bar code-enhanced process linking the patient dose with the electronic information reduces the number of crucial points for human error and provides a framework to ensure that the prepared dose reaches the correct patient. Although the proposed flowchart is designed for a site with an in-house central nuclear pharmacy, much of the framework could be applied by nuclear medicine facilities using unit doses. An electronic connection between information management systems to allow the tracking of a radiopharmaceutical from preparation to administration can be a useful tool in preventing the mistakes that are an unfortunate reality for any facility.

  7. A Human Systems Integration Perspective to Evaluating Naval Aviation Mishaps and Developing Intervention Strategies

    DTIC Science & Technology

    2009-12-01

    Reason's "Swiss cheese" model of errors and violations and the HFACS Swiss cheese framework are discussed; Reason's (1990) book, Human Error, is generally regarded as the seminal work on the subject.

  8. Discovering Innovation at the Intersection of Undergraduate Medical Education, Human Factors, and Collaboration: The Development of a Nasogastric Tube Safety Pack.

    PubMed

    Taylor, Natalie; Bamford, Thomas; Haindl, Cornelia; Cracknell, Alison

    2016-04-01

    Significant deficiencies exist in the knowledge and skills of medical students and residents around health care quality and safety. The theory and practice of quality and safety should be embedded into undergraduate medical practice so that health care professionals are capable of developing interventions and innovations to effectively anticipate and mitigate errors. Since 2011, Leeds Medical School in the United Kingdom has used case study examples of nasogastric (NG) tube patient safety incidents within the undergraduate patient safety curriculum. In 2012, a medical undergraduate student approached a clinician with an innovative idea after undertaking an NG tubes root cause analysis case study. Simultaneously, a separate local project demonstrated low compliance (11.6%) with the United Kingdom's National Patient Safety Agency NG tubes guideline for use of the correct method to check tube position. These separate endeavors led to interdisciplinary collaboration between a medical student, health care professionals, researchers, and industry to develop the Initial Placement Nasogastric Tube Safety Pack. Human factors engineering was used to inform pack design to allow guideline recommendations to be accessible and easy to follow. A timeline of product development, mapped against key human factors and medical device design principles used throughout the process, is presented. The safety pack has since been launched in five UK National Health Service (NHS) hospitals, and the pack has been introduced into health care professional staff training for NG tubes. A mixed-methods evaluation is currently under way in five NHS organizations.

  9. Assisting Movement Training and Execution With Visual and Haptic Feedback.

    PubMed

    Ewerton, Marco; Rother, David; Weimar, Jakob; Kollegger, Gerrit; Wiemeyer, Josef; Peters, Jan; Maeda, Guilherme

    2018-01-01

    In the practice of motor skills in general, errors in the execution of movements may go unnoticed when a human instructor is not available. In this case, a computer system or robotic device able to detect movement errors and propose corrections would be of great help. This paper addresses the problem of how to detect such execution errors and how to provide feedback to the human to correct his/her motor skill using a general, principled methodology based on imitation learning. The core idea is to compare the observed skill with a probabilistic model learned from expert demonstrations. The intensity of the feedback is regulated by the likelihood of the model given the observed skill. Based on demonstrations, our system can, for example, detect errors in the writing of characters with multiple strokes. Moreover, by using a haptic device, the Haption Virtuose 6D, we demonstrate a method to generate haptic feedback based on a distribution over trajectories, which could be used as an auxiliary means of communication between an instructor and an apprentice. Additionally, given a performance measurement, the haptic device can help the human discover and perform better movements to solve a given task. In this case, the human first tries a few times to solve the task without assistance. Our framework, in turn, uses a reinforcement learning algorithm to compute haptic feedback, which guides the human toward better solutions.
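
    A minimal sketch of the core idea of likelihood-regulated feedback: the observed trajectory is scored against a Gaussian model fitted to expert demonstrations, and the corrective signal grows as the per-sample likelihood drops. The independent-Gaussian-per-time-step model, the gain, and the toy trajectories are simplifying assumptions; the paper uses probabilistic models over full trajectories and a haptic device.

    ```python
    import numpy as np

    def expert_model(demonstrations):
        """Fit an independent Gaussian per time step from expert demos
        (array of shape n_demos x n_steps)."""
        demos = np.asarray(demonstrations)
        return demos.mean(axis=0), demos.std(axis=0) + 1e-6

    def corrective_feedback(observed, mean, std, gain=1.0):
        """Feedback pushes the observed trajectory toward the expert mean,
        scaled by how unlikely each sample is under the expert model."""
        z = (observed - mean) / std
        likelihood = np.exp(-0.5 * z ** 2)   # unnormalised per-step likelihood
        return gain * (1.0 - likelihood) * (mean - observed)

    # Toy 1-D trajectories (e.g., pen position over time).
    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 50)
    demos = np.sin(np.pi * t) + rng.normal(0, 0.02, (20, 50))
    mean, std = expert_model(demos)
    attempt = np.sin(np.pi * t) + 0.3 * t    # attempt with a systematic drift error
    feedback = corrective_feedback(attempt, mean, std)
    print("max |feedback| occurs at t =", t[np.argmax(np.abs(feedback))])
    ```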

  10. The Error in Total Error Reduction

    PubMed Central

    Witnauer, James E.; Urcelay, Gonzalo P.; Miller, Ralph R.

    2013-01-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modelling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. PMID:23891930
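
    The TER/LER contrast maps onto two simple delta rules, sketched below for a compound of two cues that is always reinforced: with total error reduction the cues come to share the prediction, whereas with local error reduction each cue converges to the full outcome value. The learning rate, trial structure, and function names are illustrative assumptions, not the paper's fitted models.

    ```python
    import numpy as np

    def train(cue_matrix, outcomes, rule="TER", alpha=0.2):
        """Train associative weights with total vs. local error reduction.

        cue_matrix: trials x cues binary array of which cues are present.
        TER: error = outcome - summed prediction of all present cues.
        LER: error for each cue = outcome - that cue's own prediction.
        """
        cues = np.asarray(cue_matrix, dtype=float)
        w = np.zeros(cues.shape[1])
        for x, outcome in zip(cues, outcomes):
            if rule == "TER":
                error = outcome - np.dot(w, x)   # one shared error term
                w += alpha * error * x
            else:                                # LER
                errors = outcome - w             # a separate error per cue
                w += alpha * errors * x
        return w

    # Two cues always presented together with the outcome (AB+ training).
    trials = np.ones((50, 2))
    outcomes = np.ones(50)
    print("TER weights:", np.round(train(trials, outcomes, "TER"), 2))  # ~0.5 each
    print("LER weights:", np.round(train(trials, outcomes, "LER"), 2))  # ~1.0 each
    ```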

  11. Nature and Nurture: the complex genetics of myopia and refractive error

    PubMed Central

    Wojciechowski, Robert

    2010-01-01

    The refractive errors, myopia and hyperopia, are optical defects of the visual system that can cause blurred vision. Uncorrected refractive errors are the most common causes of visual impairment worldwide. It is estimated that 2.5 billion people will be affected by myopia alone with in the next decade. Experimental, epidemiological and clinical research has shown that refractive development is influenced by both environmental and genetic factors. Animal models have demonstrated that eye growth and refractive maturation during infancy are tightly regulated by visually-guided mechanisms. Observational data in human populations provide compelling evidence that environmental influences and individual behavioral factors play crucial roles in myopia susceptibility. Nevertheless, the majority of the variance of refractive error within populations is thought to be due to hereditary factors. Genetic linkage studies have mapped two dozen loci, while association studies have implicated more than 25 different genes in refractive variation. Many of these genes are involved in common biological pathways known to mediate extracellular matrix composition and regulate connective tissue remodeling. Other associated genomic regions suggest novel mechanisms in the etiology of human myopia, such as mitochondrial-mediated cell death or photoreceptor-mediated visual signal transmission. Taken together, observational and experimental studies have revealed the complex nature of human refractive variation, which likely involves variants in several genes and functional pathways. Multiway interactions between genes and/or environmental factors may also be important in determining individual risks of myopia, and may help explain the complex pattern of refractive error in human populations. PMID:21155761

  12. Wafer bonding process for building MEMS devices

    NASA Astrophysics Data System (ADS)

    Pabo, Eric F.; Meiler, Josef; Matthias, Thorsten

    2014-06-01


  13. Paediatric Patient Safety and the Need for Aviation Black Box Thinking to Learn From and Prevent Medication Errors.

    PubMed

    Huynh, Chi; Wong, Ian C K; Correa-West, Jo; Terry, David; McCarthy, Suzanne

    2017-04-01

    Since the publication of To Err Is Human: Building a Safer Health System in 1999, there has been much research conducted into the epidemiology, nature and causes of medication errors in children, from prescribing and supply to administration. It is reassuring to see growing evidence of improving medication safety in children; however, based on media reports, it can be seen that serious and fatal medication errors still occur. This critical opinion article examines the problem of medication errors in children and provides recommendations for research, training of healthcare professionals and a culture shift towards dealing with medication errors. There are three factors that we need to consider to unravel what is missing and why fatal medication errors still occur. (1) Who is involved and affected by the medication error? (2) What factors hinder staff and organisations from learning from mistakes? Does the fear of litigation and criminal charges deter healthcare professionals from voluntarily reporting medication errors? (3) What are the educational needs required to prevent medication errors? It is important to educate future healthcare professionals about medication errors and human factors to prevent these from happening. Further research is required to apply aviation's 'black box' principles in healthcare to record and learn from near misses and errors to prevent future events. There is an urgent need for the black box investigations to be published and made public for the benefit of other organisations that may have similar potential risks for adverse events. International sharing of investigations and learning is also needed.

  14. Error propagation in energetic carrying capacity models

    USGS Publications Warehouse

    Pearse, Aaron T.; Stafford, Joshua D.

    2014-01-01

    Conservation objectives derived from carrying capacity models have been used to inform management of landscapes for wildlife populations. Energetic carrying capacity models are particularly useful in conservation planning for wildlife; these models use estimates of food abundance and energetic requirements of wildlife to target conservation actions. We provide a general method for incorporating a foraging threshold (i.e., density of food at which foraging becomes unprofitable) when estimating food availability with energetic carrying capacity models. We use a hypothetical example to describe how past methods for adjustment of foraging thresholds biased results of energetic carrying capacity models in certain instances. Adjusting foraging thresholds at the patch level of the species of interest provides results consistent with ecological foraging theory. Presentation of two case studies suggests variation in bias that, in certain instances, created large errors in conservation objectives and may have led to inefficient allocation of limited resources. Our results also illustrate how small errors or biases in application of input parameters, when extrapolated to large spatial extents, propagate errors in conservation planning and can have negative implications for target populations.
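
    The patch-level foraging-threshold adjustment described above can be illustrated with a minimal sketch. The function names, per-patch data layout, and parameter values below are hypothetical; the sketch only shows the arithmetic of clamping each patch's food density at the threshold before converting to energy and use-days.

    ```python
    # Minimal sketch (hypothetical data layout): energetic carrying capacity with a
    # patch-level foraging threshold, i.e. food below the threshold is treated as
    # unavailable within each patch rather than subtracted from the landscape total.

    def available_energy(patches, threshold_kg_ha, energy_kj_per_kg):
        """Sum food energy available above the foraging threshold, patch by patch."""
        total_kj = 0.0
        for food_kg_ha, area_ha in patches:
            usable_kg_ha = max(food_kg_ha - threshold_kg_ha, 0.0)  # clamp at patch level
            total_kj += usable_kg_ha * area_ha * energy_kj_per_kg
        return total_kj

    def use_days(total_kj, daily_requirement_kj):
        """Convert available energy to energetic carrying capacity (animal use-days)."""
        return total_kj / daily_requirement_kj

    # Hypothetical example: (food density kg/ha, area ha) for three wetland patches.
    patches = [(120.0, 10.0), (40.0, 25.0), (15.0, 50.0)]
    energy = available_energy(patches, threshold_kg_ha=20.0, energy_kj_per_kg=14_000)
    print(use_days(energy, daily_requirement_kj=1_200))
    ```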

  15. Activation of Wnt/β-catenin signaling is involved in hair growth-promoting effect of 655-nm red light and LED in in vitro culture model.

    PubMed

    Han, Le; Liu, Ben; Chen, Xianyan; Chen, Haiyan; Deng, Wenjia; Yang, Changsheng; Ji, Bin; Wan, Miaojian

    2018-04-01

    Activation of the Wnt/β-catenin signaling pathway plays an important role in hair follicle morphogenesis and hair growth. Recently, low-level laser therapy (LLLT) was evaluated for stimulating hair growth in numerous clinical studies, in which 655-nm red light was found to be most effective and practical for stimulating hair growth. We evaluated whether 655-nm red light + light-emitting diode (LED) could promote human hair growth by activating Wnt/β-catenin signaling. An in vitro culture of human hair follicles (HFs) was irradiated with different intensities of 655-nm red light + LED, 21 h7 (an inhibitor of β-catenin), or both. Immunofluorescence staining was performed to assess the expression of β-catenin, GSK3β, p-GSK3β, and Lef1 in the Wnt/β-catenin signaling. The 655-nm red light + LED not only enhanced hair shaft elongation, but also reduced catagen transition in human hair follicle organ culture, with the greatest effectiveness observed at 5 min (0.839 J/cm²). Additionally, 655-nm red light + LED enhanced the expression of β-catenin, p-GSK3β, and Lef1, signaling molecules of the Wnt/β-catenin pathway, in the hair matrix. Activation of Wnt/β-catenin signaling is involved in the hair growth-promoting effect of 655-nm red light and LED in vitro and therefore may serve as an alternative therapeutic option for alopecia.

  16. Integrating Safety in the Aviation System: Interdepartmental Training for Pilots and Maintenance Technicians

    NASA Technical Reports Server (NTRS)

    Mattson, Marifran; Petrin, Donald A.; Young, John P.

    2001-01-01

    The study of human factors has had a decisive impact on the aviation industry. However, the entire aviation system often is not considered in researching, training, and evaluating human factors issues especially with regard to safety. In both conceptual and practical terms, we argue for the proactive management of human error from both an individual and organizational systems perspective. The results of a multidisciplinary research project incorporating survey data from professional pilots and maintenance technicians and an exploratory study integrating students from relevant disciplines are reported. Survey findings suggest that latent safety errors may occur during the maintenance discrepancy reporting process because pilots and maintenance technicians do not effectively interact with one another. The importance of interdepartmental or cross-disciplinary training for decreasing these errors and increasing safety is discussed as a primary implication.

  17. Human Error and Commercial Aviation Accidents: A Comprehensive, Fine-Grained Analysis Using HFACS

    DTIC Science & Technology

    2006-07-01

    Extraction fragments (only excerpts recoverable): "... the HFACS framework (Figure 2) ... practiced and seemingly automatic behaviors is that they are particularly susceptible to attention and/or memory ... been included in most error frameworks, the third and final error form, perceptual errors, has received comparatively less attention. No less ... operate safely. After all, just as not everyone can play linebacker for their favorite professional football team or be a concert pianist, not ..."

  18. The Quest for Rigour in Physics--The Life and Legacy of John William Warren

    ERIC Educational Resources Information Center

    Leadstone, Stuart

    2017-01-01

    For over half a century, starting around 1960, physics education was put under the intellectual microscope of a London-based university lecturer--Dr John Warren. His scrutiny of physics textbooks and examination papers in particular led him to conduct a sustained assault on error, ambiguity and lack of rigour in the presentation of our subject.…

  19. A Real-Time Systems Symposium Preprint.

    DTIC Science & Technology

    1983-09-01

    Extraction fragments (OCR-garbled cover sheet; only excerpts recoverable): "Real-Time Systems Symposium Preprint, Interim Tech... estimate of the occurrence of the error... ABSTRACT: This technical report contains a preprint of a paper accepted for presentation at the Real-Time Systems Symposium, Arlington, ..."

  20. Publisher Correction: Tunnelling spectroscopy of gate-induced superconductivity in MoS2

    NASA Astrophysics Data System (ADS)

    Costanzo, Davide; Zhang, Haijing; Reddy, Bojja Aditya; Berger, Helmuth; Morpurgo, Alberto F.

    2018-06-01

    In the version of this Article originally published, an error during typesetting led to the curve in Fig. 2a being shifted to the right, and the curves in the inset of Fig. 2a being displaced. The figure has now been corrected in all versions of the Article; the original and corrected Fig. 2a are shown below.

  1. Corrigendum to "Nearest neighbor imputation of species-level, plot-scale forest structure attributes from LiDAR data"

    Treesearch

    Andrew T. Hudak; Nicholas L. Crookston; Jeffrey S. Evans; David E. hall; Michael J. Falkowski

    2009-01-01

    The authors regret that an error was discovered in the code within the R software package, yaImpute (Crookston & Finley, 2008), which led to incorrect results reported in the above article. The Most Similar Neighbor (MSN) method computes the distance between reference observations and target observations in a projected space defined using canonical correlation...

  2. Indoor positioning algorithm combined with angular vibration compensation and the trust region technique based on received signal strength-visible light communication

    NASA Astrophysics Data System (ADS)

    Wang, Jin; Li, Haoxu; Zhang, Xiaofeng; Wu, Rangzhong

    2017-05-01

    Indoor positioning using visible light communication has become a topic of intensive research in recent years. In practice, the normal of the receiver always deviates from that of the transmitter, so positioning systems that require the receiver normal to be aligned with the transmitter normal suffer large positioning errors. Some algorithms take angular vibrations into account; nevertheless, these algorithms cannot meet the requirements of high accuracy or low complexity. A visible light positioning algorithm combined with angular vibration compensation is proposed. Angle information from an accelerometer or other angle acquisition device is used to calculate the angle of incidence even when the receiver is not horizontal. Meanwhile, a received signal strength technique with high accuracy is employed to determine the location. Moreover, an eight-light-emitting-diode (LED) system model is provided to improve the accuracy. The simulation results show that the proposed system can achieve a low positioning error with low complexity, and the eight-LED system exhibits improved performance. Furthermore, trust region-based positioning is proposed to determine three-dimensional locations and achieves high accuracy in both the horizontal and the vertical components.
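
    As a rough illustration of the received-signal-strength step, the sketch below inverts a Lambertian line-of-sight channel-gain model to estimate transmitter-receiver distance, with tilt angles (e.g. from an accelerometer) used to compensate the incidence angle. The model form is the standard Lambertian LOS gain; the specific parameter values and function names are illustrative and not taken from the paper.

    ```python
    import numpy as np

    # Sketch of RSS-based ranging with angular compensation (illustrative values).
    # Lambertian LOS gain: Pr = Pt * (m+1)/(2*pi*d^2) * A * cos(phi)^m * cos(psi),
    # where phi is the irradiance angle at the LED and psi the incidence angle at
    # the photodiode; an accelerometer supplies the receiver tilt used to get psi.

    def lambertian_order(half_power_angle_deg):
        return -np.log(2) / np.log(np.cos(np.radians(half_power_angle_deg)))

    def estimate_distance(p_rx, p_tx, area, phi, psi, half_power_angle_deg=60.0):
        """Invert the Lambertian model for distance d given measured power p_rx."""
        m = lambertian_order(half_power_angle_deg)
        numerator = p_tx * (m + 1) / (2 * np.pi) * area * np.cos(phi) ** m * np.cos(psi)
        return np.sqrt(numerator / p_rx)

    # Example: 1 W LED, 1 cm^2 detector, 15 deg irradiance angle, 10 deg tilt-corrected
    # incidence angle, 3.2 uW received power -> estimated range in metres.
    d = estimate_distance(p_rx=3.2e-6, p_tx=1.0, area=1e-4,
                          phi=np.radians(15), psi=np.radians(10))
    print(round(d, 2))
    ```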

  3. Disparity between online and offline tests in accelerated aging tests of LED lamps under electric stress.

    PubMed

    Wang, Yao; Jing, Lei; Ke, Hong-Liang; Hao, Jian; Gao, Qun; Wang, Xiao-Xun; Sun, Qiang; Xu, Zhi-Jun

    2016-09-20

    Accelerated aging tests under electric stress were conducted for one type of LED lamp, and the differences between online and offline tests of luminous flux degradation are studied in this paper. Switching between the two test modes is achieved with an adjustable AC voltage-stabilized power source. Experimental results show that the exponential fitting of the luminous flux degradation in online tests has a higher goodness of fit for most lamps, and the degradation rate of the luminous flux measured online is always lower than that measured offline. Bayes estimation and the Weibull distribution are used to calculate the failure probabilities under the accelerated voltages, and the reliability of the lamps under the rated voltage of 220 V is then estimated using the inverse power law model. Results show that the relative error of the lifetime estimation by offline tests increases as the failure probability decreases, and it cannot be neglected when the failure probability is less than 1%. The relative errors of lifetime estimation are 7.9%, 5.8%, 4.2%, and 3.5% at failure probabilities of 0.1%, 1%, 5%, and 10%, respectively.
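
    A compact sketch of the extrapolation step is given below: Weibull failure probabilities obtained at an accelerated voltage are mapped to the rated voltage with the inverse power law. The exponent, voltages, and Weibull parameters are made-up placeholders; only the functional form follows the standard acceleration model.

    ```python
    import numpy as np

    # Sketch: inverse power law lifetime extrapolation (illustrative parameters).
    # Inverse power law: L(V) = C / V**n, so L(V_rated) = L(V_acc) * (V_acc/V_rated)**n.
    # The Weibull CDF gives the time to reach a chosen failure probability at a stress level.

    def weibull_time_at_probability(eta, beta, p):
        """Time at which the Weibull CDF F(t) = 1 - exp(-(t/eta)**beta) reaches p."""
        return eta * (-np.log(1.0 - p)) ** (1.0 / beta)

    def extrapolate_to_rated(t_acc, v_acc, v_rated, n):
        """Scale a lifetime from accelerated voltage to rated voltage."""
        return t_acc * (v_acc / v_rated) ** n

    # Hypothetical Weibull fit at an accelerated voltage of 300 V: eta = 4000 h, beta = 2.1.
    for p in (0.001, 0.01, 0.05, 0.10):
        t300 = weibull_time_at_probability(eta=4000.0, beta=2.1, p=p)
        t220 = extrapolate_to_rated(t300, v_acc=300.0, v_rated=220.0, n=5.0)
        print(f"failure probability {p:.1%}: ~{t220:,.0f} h at 220 V")
    ```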

  4. Aircrew perceived stress: examining crew performance, crew position and captains personality.

    PubMed

    Bowles, S; Ursin, H; Picano, J

    2000-11-01

    This study was conducted at NASA Ames Research Center as part of a larger research project assessing the impact of the captain's personality on crew performance and perceived stress in 24 air transport crews (5). Three different personality types for captains were classified based on a previous cluster analysis (3). Crews were comprised of three crewmembers: captain, first officer, and second officer/flight engineer. A total of 72 pilots completed a 1.5-d full-mission simulation of airline operations, including emergency situations, in the Ames Manned Vehicle System Research Facility B-727 simulator. Crewmembers were tested for perceived stress on four dimensions of the NASA Task Load Index after each of five flight legs. Crews were divided into three groups based on rankings from combined error and rating scores. High-performance crews (who committed the fewest errors in flight) reported experiencing less stress in simulated flight than either low or medium crews. When comparing crew positions for perceived stress over all the simulated flights, no significant differences were found. However, the crews led by the "Right Stuff" (e.g., active, warm, confident, competitive, and preferring excellence and challenges) personality type captains typically reported less stress than crewmembers led by other personality types.

  5. [Effect of Mn(II) on the error-prone DNA polymerase iota activity in extracts from human normal and tumor cells].

    PubMed

    Lakhin, A V; Efremova, A S; Makarova, I V; Grishina, E E; Shram, S I; Tarantul, V Z; Gening, L V

    2013-01-01

    The DNA polymerase iota (Pol iota), which has some peculiar features and is characterized by an extremely error-prone DNA synthesis, belongs to the group of enzymes preferentially activated by Mn²⁺ instead of Mg²⁺. In this work, the effect of Mn²⁺ on DNA synthesis in cell extracts from a) normal human and murine tissues, b) human tumor (uveal melanoma), and c) cultured human tumor cell lines SKOV-3 and HL-60 was tested. Each group displayed characteristic features of Mn-dependent DNA synthesis. The changes in the Mn-dependent DNA synthesis caused by malignant transformation of normal tissues are described. It was also shown that the error-prone DNA synthesis catalyzed by Pol iota in extracts of all cell types was efficiently suppressed by an RNA aptamer (IKL5) against Pol iota obtained in our work earlier. The obtained results suggest that IKL5 might be used to suppress the enhanced activity of Pol iota in tumor cells.

  6. Standardization of UV LED measurements

    NASA Astrophysics Data System (ADS)

    Eppeldauer, G. P.; Larason, T. C.; Yoon, H. W.

    2015-09-01

    Traditionally used source spectral-distribution or detector spectral-response based standards cannot be applied for accurate UV LED measurements. Since the CIE-standardized rectangular-shape spectral response function for UV measurements cannot be realized with small spectral mismatch when using filtered detectors, the UV measurement errors can be several tens of percent or larger. UV LEDs produce broadband radiation, and both their peak wavelengths and spectral bandwidths can change significantly. The detectors used for the measurement of these LEDs also have different spectral bandwidths. In the example discussed, where LEDs with a 365 nm peak are applied for fluorescent crack recognition using liquid penetrant (non-destructive) inspection, the broadband radiometric LED (signal) measurement procedure is standardized. A UV LED irradiance source was calibrated against an FEL lamp standard to determine its spectral irradiance. The spectral irradiance responsivity of a reference UV meter was also calibrated. The output signal of the reference UV meter was calculated from the spectral irradiance of the UV source and the spectral irradiance responsivity of the reference UV meter. From the output signal, both the integrated irradiance (in the reference plane of the reference meter) and the integrated responsivity of the reference meter were determined. Test UV meters, calibrated for integrated responsivity against the reference UV meter, can be used to determine the integrated irradiance from a field UV source. The obtained 5% (k=2) measurement uncertainty can be decreased when meters with spectral response close to a constant value are selected.
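
    The calibration chain described here amounts to integrating the source spectral irradiance against the meter's spectral irradiance responsivity to predict the meter output, then dividing by the integrated irradiance to obtain an integrated responsivity. A minimal numerical sketch follows; the spectra are synthetic Gaussian stand-ins, not measured data.

    ```python
    import numpy as np

    # Sketch of the broadband (integrated) quantities used in UV LED calibration.
    #   output signal           I     = integral of E(lambda) * s(lambda) d(lambda)
    #   integrated irradiance   E_int = integral of E(lambda) d(lambda)
    #   integrated responsivity s_int = I / E_int

    wl = np.linspace(330.0, 400.0, 701)                      # wavelength grid (nm)

    # Synthetic 365 nm LED spectral irradiance (Gaussian, arbitrary scale, W m^-2 nm^-1)
    E = 0.05 * np.exp(-0.5 * ((wl - 365.0) / 5.0) ** 2)

    # Synthetic meter spectral irradiance responsivity (A per W m^-2, per nm basis)
    s = 2.0e-6 * np.exp(-0.5 * ((wl - 368.0) / 12.0) ** 2)

    signal = np.trapz(E * s, wl)        # predicted meter output signal
    E_int = np.trapz(E, wl)             # integrated irradiance in the reference plane
    s_int = signal / E_int              # integrated responsivity of the meter

    print(f"integrated irradiance: {E_int:.3f} W/m^2, "
          f"integrated responsivity: {s_int:.3e} A/(W/m^2)")
    ```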

  7. Experimental methods to validate measures of emotional state and readiness for duty in critical operations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weston, Louise Marie

    2007-09-01

    A recent report on criticality accidents in nuclear facilities indicates that human error played a major role in a significant number of incidents with serious consequences and that some of these human errors may be related to the emotional state of the individual. A pre-shift test to detect a deleterious emotional state could reduce the occurrence of such errors in critical operations. The effectiveness of pre-shift testing is a challenge because of the need to gather predictive data in a relatively short test period and the potential occurrence of learning effects due to a requirement for frequent testing. This report reviews the different types of reliability and validity methods and testing and statistical analysis procedures to validate measures of emotional state. The ultimate value of a validation study depends upon the percentage of human errors in critical operations that are due to the emotional state of the individual. A review of the literature to identify the most promising predictors of emotional state for this application is highly recommended.

  8. Multi-muscle FES force control of the human arm for arbitrary goals.

    PubMed

    Schearer, Eric M; Liao, Yu-Wei; Perreault, Eric J; Tresch, Matthew C; Memberg, William D; Kirsch, Robert F; Lynch, Kevin M

    2014-05-01

    We present a method for controlling a neuroprosthesis for a paralyzed human arm using functional electrical stimulation (FES) and characterize the errors of the controller. The subject has surgically implanted electrodes for stimulating muscles in her shoulder and arm. Using input/output data, a model mapping muscle stimulations to isometric endpoint forces measured at the subject's hand was identified. We inverted the model of this redundant and coupled multiple-input multiple-output system by minimizing muscle activations and used this inverse for feedforward control. The magnitude of the total root mean square error over a grid in the volume of achievable isometric endpoint force targets was 11% of the total range of achievable forces. Major sources of error were random error due to trial-to-trial variability and model bias due to nonstationary system properties. Because the muscles working collectively are the actuators of the skeletal system, the quantification of errors in force control guides designs of motion controllers for multi-joint, multi-muscle FES systems that can achieve arbitrary goals.
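
    The feedforward step described here amounts to inverting a redundant, linear stimulation-to-force map while keeping muscle activations small and non-negative. The sketch below uses regularized non-negative least squares on a hypothetical gain matrix; it illustrates the structure of the problem (match a target isometric endpoint force with minimal activation), not the subject-specific model identified in the study.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Sketch: feedforward FES control by inverting a redundant muscle-to-force map.
    # F = A @ u with u >= 0 (muscle activations) and F the 3-D isometric endpoint force.
    # nnls finds non-negative activations; a small Tikhonov-style augmentation keeps
    # activations low for this redundant, coupled system.

    rng = np.random.default_rng(0)
    A = rng.uniform(-5.0, 5.0, size=(3, 8))      # hypothetical N per unit activation

    def feedforward_activations(A, f_target, reg=0.1):
        n = A.shape[1]
        A_aug = np.vstack([A, np.sqrt(reg) * np.eye(n)])   # penalize large activations
        b_aug = np.concatenate([f_target, np.zeros(n)])
        u, _ = nnls(A_aug, b_aug)
        return u

    f_target = np.array([4.0, -2.0, 6.0])        # desired endpoint force (N)
    u = feedforward_activations(A, f_target)
    achieved = A @ u
    rms = float(np.sqrt(np.mean((achieved - f_target) ** 2)))
    print("activations:", np.round(u, 3))
    print("achieved force:", np.round(achieved, 2), "RMS error:", round(rms, 3))
    ```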

  9. The current approach to human error and blame in the NHS.

    PubMed

    Ottewill, Melanie

    There is a large body of research to suggest that serious errors are widespread throughout medicine. The traditional response to these adverse events has been to adopt a 'person approach' - blaming the individual seen as 'responsible'. The culture of medicine is highly complicit in this response. Such an approach results in enormous personal costs to the individuals concerned and does little to address the root causes of errors and thus prevent their recurrence. Other industries, such as aviation, where safety is a paramount concern and which have similar structures to the medical profession, have, over the past decade or so, adopted a 'systems' approach to error, recognizing that human error is ubiquitous and inevitable and that systems need to be developed with this in mind. This approach has been highly successful, but has necessitated, first and foremost, a cultural shift. It is in the best interests of patients, and medical professionals alike, that such a shift is embraced in the NHS.

  10. Cognitive errors: thinking clearly when it could be child maltreatment.

    PubMed

    Laskey, Antoinette L

    2014-10-01

    Cognitive errors have been studied in a broad array of fields, including medicine. The more that is understood about how the human mind processes complex information, the more it becomes clear that certain situations are particularly susceptible to less than optimal outcomes because of these errors. This article explores how some of the known cognitive errors may influence the diagnosis of child abuse, resulting in both false-negative and false-positive diagnoses. Suggested remedies for these errors are offered. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passarge, M; Fix, M K; Manser, P

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
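
    The "Swiss cheese" sequence of independent checks can be pictured as a short gating pipeline: each 2° frame must pass every test in order, and the first test that fails labels the likely error source. The check implementations below are trivial placeholders with made-up tolerances; they only illustrate the sequential gating idea, not the actual masking, alignment, gamma, or intensity-deviation algorithms.

    ```python
    # Sketch of a sequential, independently executed error-detection chain for one
    # 2-degree EPID frame. Each check returns True when the frame passes; the first
    # failing check identifies the likely error source. All checks here are mock
    # placeholders with invented tolerances.

    def check_infield_mask(frame):      return frame["out_of_field_signal"] < 0.01
    def check_output_norm(frame):       return abs(frame["output_ratio"] - 1.0) < 0.03
    def check_alignment(frame):         return frame["shift_mm"] < 1.0 and frame["rotation_deg"] < 0.5
    def check_gamma(frame):             return frame["gamma_pass_rate"] > 0.95   # 3%/3 mm criterion
    def check_pixel_deviation(frame):   return frame["max_pixel_dev"] < 0.05

    CHECKS = [
        ("masking / out-of-field radiation", check_infield_mask),
        ("output normalization", check_output_norm),
        ("global image alignment", check_alignment),
        ("gamma evaluation (3%, 3 mm)", check_gamma),
        ("pixel intensity deviation", check_pixel_deviation),
    ]

    def evaluate_frame(frame):
        """Return (passed, failing_check_name) for one gantry-angle interval."""
        for name, check in CHECKS:
            if not check(frame):
                return False, name          # error detected; name hints at the source
        return True, None

    frame = {"out_of_field_signal": 0.002, "output_ratio": 1.01, "shift_mm": 0.4,
             "rotation_deg": 0.1, "gamma_pass_rate": 0.91, "max_pixel_dev": 0.02}
    print(evaluate_frame(frame))            # -> (False, 'gamma evaluation (3%, 3 mm)')
    ```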

  12. Partial least square method for modelling ergonomic risks factors on express bus accidents in the east coast of peninsular west Malaysia

    NASA Astrophysics Data System (ADS)

    Hashim, Yusof bin; Taha, Zahari bin

    2015-02-01

    The public, stakeholders and authorities in the Malaysian government show great concern over the high numbers of passenger injuries and fatalities in express bus accidents. This paper studies the ergonomic risk factors underlying human error as a cause of express bus accidents, in order to develop an integrated analytical framework. Reliable information about drivers and bus accidents should lead to the design of strategies intended to make the public feel safe in public transport services. In addition, the ergonomic risk factors are analysed to determine which of them most strongly led to accidents. The research was performed on the east coast of peninsular Malaysia using variance-based structural equation modelling, namely the Partial Least Squares (PLS) regression technique. A questionnaire survey was carried out at random among 65 express bus drivers operating from the city of Kuantan in Pahang and 49 express bus drivers operating from the city of Kuala Terengganu in Terengganu to all towns on the east coast of peninsular west Malaysia. The ergonomic risk factors questionnaire covers demographic information, occupational information, organizational safety climate, ergonomic workplace, physiological factors, stress at the workplace, physical fatigue and near-miss accidents. The correlations and significance values between latent constructs (near-miss accident) were analysed using SEM SmartPLS 3M. The findings show that the correlated ergonomic risk factors (occupational information, t = 2.04; stress at the workplace, t = 2.81; physiological factors, t = 2.08) are significantly related to physical fatigue, which acts as a mediator to near-miss accidents at t = 2.14, at p < 0.05 and t > 1.96. The results show that the effects of physical fatigue due to ergonomic risk factors influence human error as a cause of express bus accidents.
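
    For readers unfamiliar with the variance-based modelling step, the fragment below shows a generic partial least squares regression of a near-miss/fatigue outcome on ergonomic risk-factor scores using scikit-learn. It is a generic PLS regression illustration on synthetic data, not the SmartPLS structural model estimated in the study; the variable layout is invented.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Generic PLS regression sketch on synthetic survey-style data (invented variables):
    # predictors = ergonomic risk-factor scores, response = near-miss/fatigue score.
    rng = np.random.default_rng(1)
    n = 114                                            # e.g. 65 + 49 respondents
    X = rng.normal(size=(n, 5))                        # occupational info, safety climate,
                                                       # workplace ergonomics, stress, physiology
    true_w = np.array([0.4, 0.1, 0.2, 0.7, 0.5])
    y = X @ true_w + rng.normal(scale=0.8, size=n)     # synthetic near-miss outcome

    pls = PLSRegression(n_components=2)
    pls.fit(X, y)
    print("R^2:", round(pls.score(X, y), 3))
    print("PLS coefficients:", np.round(pls.coef_.ravel(), 3))
    ```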

  14. Does the A-not-B error in adult pet dogs indicate sensitivity to human communication?

    PubMed

    Kis, Anna; Topál, József; Gácsi, Márta; Range, Friederike; Huber, Ludwig; Miklósi, Adám; Virányi, Zsófia

    2012-07-01

    Recent dog-infant comparisons have indicated that the experimenter's communicative signals in object hide-and-search tasks increase the probability of perseverative (A-not-B) errors in both species (Topál et al. 2009). These behaviourally similar results, however, might reflect different mechanisms in dogs and in children. Similar errors may occur if the motor response of retrieving the object during the A trials cannot be inhibited in the B trials or if the experimenter's movements and signals toward the A hiding place in the B trials ('sham-baiting') distract the dogs' attention. In order to test these hypotheses, we tested dogs similarly to Topál et al. (2009) but eliminated the motor search in the A trials and 'sham-baiting' in the B trials. We found that neither an inability to inhibit previously rewarded motor response nor insufficiencies in their working memory and/or attention skills can explain dogs' erroneous choices. Further, we replicated the finding that dogs have a strong tendency to commit the A-not-B error after ostensive-communicative hiding and demonstrated the crucial effect of socio-communicative cues as the A-not-B error diminishes when location B is ostensively enhanced. These findings further support the hypothesis that the dogs' A-not-B error may reflect a special sensitivity to human communicative cues. Such object-hiding and search tasks provide a typical case for how susceptibility to human social signals could (mis)lead domestic dogs.

  15. A short-range optical wireless transmission method based on LED

    NASA Astrophysics Data System (ADS)

    Miao, Meiyuan; Chen, Ailin; Zhu, Mingxing; Li, Ping; Gao, Yingming; Zou, Nianyu

    2016-10-01

    To address the electromagnetic interference and one-to-one transmission limitations of Bluetooth, this paper proposes a short-range LED optical wireless transmission method as a complementary technology, and demonstrates image transmission with it. The system uses a C52 microcontroller as the master controller; the transmitter receives data from terminals over USB and sends modulated signals through an LED. The optical signal is detected by a photodiode (PD), then amplified, filtered, wave-shaped and demodulated at the receiver before being sent to a terminal such as a PC and restored to the original image. Performance is analysed in terms of peak and average power, transmitter power consumption, the relationship between bit error rate and modulation mode, and the influence of ambient light. The results show that images can be received accurately with this method. The maximum transmission distance reaches 1 m with a 1 W LED source at the transmitter, the transfer rate is 14.4 kbit/s with OOK modulation once the system is stable, and ambient light has little effect on the LED transmission system under normal lighting conditions. The method offers a portable LED wireless short-range transmission option for mobile equipment, supplementing Bluetooth, whose ISM band is subject to interference, and the analysis approach in this paper can serve as a reference for other similar systems. It also shows the system is feasible for further study.
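
    Since the link uses on-off keying, a toy modulator/demodulator conveys the idea: each bit switches the LED on or off for a fixed symbol period, and the receiver thresholds the (noisy) photodiode samples. The sampling rate, noise level, and threshold below are invented for illustration and are not the paper's parameters.

    ```python
    import numpy as np

    # Toy OOK link sketch (illustrative parameters): each bit drives the LED fully on
    # or off for one symbol period; the receiver averages the samples per symbol and
    # applies a fixed threshold to recover the bits, then the bit error rate is checked.

    rng = np.random.default_rng(7)
    bits = rng.integers(0, 2, size=1000)
    samples_per_bit = 10

    tx = np.repeat(bits, samples_per_bit).astype(float)        # LED on/off waveform
    rx = tx + rng.normal(scale=0.3, size=tx.size)              # channel + receiver noise

    symbol_means = rx.reshape(-1, samples_per_bit).mean(axis=1)
    decoded = (symbol_means > 0.5).astype(int)                 # fixed decision threshold

    ber = np.mean(decoded != bits)
    print(f"bit error rate: {ber:.4f}")
    ```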

  16. Augmenting intracortical brain-machine interface with neurally driven error detectors

    NASA Astrophysics Data System (ADS)

    Even-Chen, Nir; Stavisky, Sergey D.; Kao, Jonathan C.; Ryu, Stephen I.; Shenoy, Krishna V.

    2017-12-01

    Objective. Making mistakes is inevitable, but identifying them allows us to correct or adapt our behavior to improve future performance. Current brain-machine interfaces (BMIs) make errors that need to be explicitly corrected by the user, thereby consuming time and thus hindering performance. We hypothesized that neural correlates of the user perceiving the mistake could be used by the BMI to automatically correct errors. However, it was unknown whether intracortical outcome error signals were present in the premotor and primary motor cortices, brain regions successfully used for intracortical BMIs. Approach. We report here for the first time a putative outcome error signal in spiking activity within these cortices when rhesus macaques performed an intracortical BMI computer cursor task. Main results. We decoded BMI trial outcomes shortly after and even before a trial ended with 96% and 84% accuracy, respectively. This led us to develop and implement in real-time a first-of-its-kind intracortical BMI error ‘detect-and-act’ system that attempts to automatically ‘undo’ or ‘prevent’ mistakes. The detect-and-act system works independently and in parallel to a kinematic BMI decoder. In a challenging task that resulted in substantial errors, this approach improved the performance of a BMI employing two variants of the ubiquitous Kalman velocity filter, including a state-of-the-art decoder (ReFIT-KF). Significance. Detecting errors in real-time from the same brain regions that are commonly used to control BMIs should improve the clinical viability of BMIs aimed at restoring motor function to people with paralysis.
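
    The outcome-decoding step reported above can be pictured with a simple classifier over binned spike counts. The sketch below trains a linear classifier on synthetic firing rates that stand in for motor/premotor activity; it only illustrates the decoding idea and is not the decoder or data used in the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Sketch: decoding trial outcome (error vs. success) from binned spike counts.
    # Synthetic data stand in for motor/premotor firing rates; a linear classifier
    # illustrates the decoding step that would feed a "detect-and-act" module.

    rng = np.random.default_rng(5)
    n_trials, n_units = 400, 60
    outcome = rng.integers(0, 2, n_trials)                 # 1 = error trial, 0 = success
    tuning = rng.normal(scale=0.6, size=n_units)           # outcome-related rate changes
    rates = rng.poisson(lam=5.0, size=(n_trials, n_units)) + \
            np.outer(outcome, np.clip(tuning, 0, None) * 3)

    clf = LogisticRegression(max_iter=2000)
    acc = cross_val_score(clf, rates, outcome, cv=5).mean()
    print(f"cross-validated outcome-decoding accuracy: {acc:.2f}")
    ```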

  17. Systematic Evaluation of Wajima Superposition (Steady-State Concentration to Mean Residence Time) in the Estimation of Human Intravenous Pharmacokinetic Profile.

    PubMed

    Lombardo, Franco; Berellini, Giuliano; Labonte, Laura R; Liang, Guiqing; Kim, Sean

    2016-03-01

    We present a systematic evaluation of the Wajima superpositioning method to estimate the human intravenous (i.v.) pharmacokinetic (PK) profile based on a set of 54 marketed drugs with diverse structure and range of physicochemical properties. We illustrate the use of average of "best methods" for the prediction of clearance (CL) and volume of distribution at steady state (VDss) as described in our earlier work (Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):178-191; Lombardo F, Waters NJ, Argikar UA, et al. J Clin Pharmacol. 2013;53(2):167-177). These methods provided much more accurate prediction of human PK parameters, yielding 88% and 70% of the prediction within 2-fold error for VDss and CL, respectively. The prediction of human i.v. profile using Wajima superpositioning of rat, dog, and monkey time-concentration profiles was tested against the observed human i.v. PK using fold error statistics. The results showed that 63% of the compounds yielded a geometric mean of fold error below 2-fold, and an additional 19% yielded a geometric mean of fold error between 2- and 3-fold, leaving only 18% of the compounds with a relatively poor prediction. Our results showed that good superposition was observed in any case, demonstrating the predictive value of the Wajima approach, and that the cause of poor prediction of human i.v. profile was mainly due to the poorly predicted CL value, while VDss prediction had a minor impact on the accuracy of human i.v. profile prediction. Copyright © 2016. Published by Elsevier Inc.
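
    The fold-error statistic quoted above (geometric mean of the fold error between predicted and observed concentrations) is simple to reproduce; the sketch below computes it for a hypothetical predicted/observed concentration-time profile pair. The values are made up for illustration.

    ```python
    import numpy as np

    # Geometric mean fold error (GMFE) between predicted and observed concentrations:
    # fold error at each time point = max(pred/obs, obs/pred) >= 1,
    # GMFE = exp(mean(log(fold errors))). Values below are hypothetical.

    def gmfe(predicted, observed):
        predicted, observed = np.asarray(predicted, float), np.asarray(observed, float)
        fold = np.maximum(predicted / observed, observed / predicted)
        return float(np.exp(np.mean(np.log(fold))))

    pred_conc = [12.0, 8.1, 5.0, 2.4, 1.1]     # hypothetical predicted human i.v. profile (ng/mL)
    obs_conc  = [10.0, 9.0, 4.1, 2.0, 1.4]     # hypothetical observed profile (ng/mL)
    print(round(gmfe(pred_conc, obs_conc), 2))  # ~1.2, i.e. well within 2-fold
    ```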

  18. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    PubMed

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. Here case-specific probabilities of undetected errors are needed. These should be reported, separately from the match probability, when requested by the court or when there are internal or external indications for error. It should also be made clear that there are various other issues to consider, like DNA transfer. Forensic statistical models, in particular Bayesian networks, may be useful to take the various uncertainties into account and demonstrate their effects on the evidential value of the forensic DNA results. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  19. Implantable wireless powered light emitting diode (LED) for near-infrared photoimmunotherapy: device development and experimental assessment in vitro and in vivo.

    PubMed

    Nakajima, Kohei; Kimura, Toshihiro; Takakura, Hideo; Yoshikawa, Yasuo; Kameda, Atsushi; Shindo, Takayuki; Sato, Kazuhide; Kobayashi, Hisataka; Ogawa, Mikako

    2018-04-13

    The aim of this study was to develop and assess a novel implantable, wireless-powered, light-emitting diode (LED) for near-infrared photoimmunotherapy (NIR-PIT). NIR-PIT is a recently developed cancer therapy that uses NIR light and antibody-photosensitizer conjugates and is able to induce cancer-specific cell death. Due to limited light penetration depth, it is currently unable to treat tumors in deep tissues; an implanted LED might overcome this limitation. The implantable LED system consisted of an LED capsule containing two LED sources and a receiver coil coupled with an external coil and power source, with wireless power transmission achieved by electromagnetic induction. The system was tested in vitro using EGFR-expressing and HER2-expressing cells, and in vivo in tumor-bearing mice. The wireless LED system was able to emit NIR light up to a distance of 20 cm from the transmitter coil using low magnetic fields compliant with limits for use in humans. The system killed tumor cells in vitro and suppressed tumor growth in implanted tumor-bearing mice in vivo. These results are encouraging, as wireless LED systems such as the one developed here may offer a way to treat tumors in deep regions in humans; further research in this area would be important.

  20. The activation of directional stem cell motility by green light-emitting diode irradiation.

    PubMed

    Ong, Wei-Kee; Chen, How-Foo; Tsai, Cheng-Ting; Fu, Yun-Ju; Wong, Yi-Shan; Yen, Da-Jen; Chang, Tzu-Hao; Huang, Hsien-Da; Lee, Oscar Kuang-Sheng; Chien, Shu; Ho, Jennifer Hui-Chun

    2013-03-01

    Light-emitting diode (LED) irradiation is potentially a photostimulator for manipulating cell behavior via opsin-triggered phototransduction and thermal energy supply in living cells. Directional stem cell motility is critical for the efficiency and specificity of stem cells in tissue repair. We found that green LED (530 nm) irradiation directed human orbital fat stem cells (OFSCs) to migrate away from the LED light source through activation of the extracellular signal-regulated kinase (ERK)/MAP kinase/p38 signaling pathway. An ERK inhibitor selectively abrogated light-driven OFSC migration. Phosphorylation of these kinases, as well as green LED irradiation-induced cell migration, was facilitated by increased adenosine triphosphate (ATP) production in OFSCs after green LED exposure, through a thermal stress-independent mechanism. OFSCs, which are multipotent mesenchymal stem cells isolated from human orbital fat tissue, constitutively express three opsins, i.e. retinal pigment epithelium-derived rhodopsin homolog (RRH), encephalopsin (OPN3) and short-wave-sensitive opsin 1 (OPN1SW). However, only two non-visual opsins, i.e. RRH and OPN3, served as photoreceptors in the response to green LED irradiation-induced OFSC migration. In conclusion, stem cells are sensitive to green LED irradiation-induced directional cell migration through activation of the ERK signaling pathway via wavelength-dependent phototransduction. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Impact of pharmacy technician-centered medication reconciliation on optimization of antiretroviral therapy and opportunistic infection prophylaxis in hospitalized patients with HIV/AIDS.

    PubMed

    Siemianowski, Laura A; Sen, Sanchita; George, Jomy M

    2013-08-01

    This study aimed to examine the role of a pharmacy technician-centered medication reconciliation (PTMR) program in optimization of medication therapy in hospitalized patients with HIV/AIDS. A chart review was conducted for all inpatients who had a medication reconciliation performed by the PTMR program. Adult patients with HIV and antiretroviral therapy (ART) and/or opportunistic infection (OI) prophylaxis listed on the medication reconciliation form were included. The primary objective was to describe (1) the number and types of medication errors and (2) the percentage of patients who received appropriate ART. The secondary objective was a comparison of the number of medication errors between standard medication reconciliation and the pharmacy-led program. In the PTMR period, 55 admissions were evaluated, and 50% of the patients received appropriate ART. In 27 of the 55 admissions, there were 49 combined ART- and OI-related errors. The most common ART-related errors were drug-drug interactions. The incidence of ART-related medication errors involving drug-drug interactions and renal dosing adjustments was similar between the pre-PTMR and PTMR groups (P = .0868). Of the 49 errors in the PTMR group, 18 were addressed through intervention by a medication reconciliation pharmacist. A PTMR program has a positive impact on optimizing ART and OI prophylaxis in patients with HIV/AIDS.

  2. A Neurobehavioral Mechanism Linking Behaviorally Inhibited Temperament and Later Adolescent Social Anxiety.

    PubMed

    Buzzell, George A; Troller-Renfree, Sonya V; Barker, Tyson V; Bowman, Lindsay C; Chronis-Tuscano, Andrea; Henderson, Heather A; Kagan, Jerome; Pine, Daniel S; Fox, Nathan A

    2017-12-01

    Behavioral inhibition (BI) is a temperament identified in early childhood that is a risk factor for later social anxiety. However, the mechanisms underlying the development of social anxiety remain unclear. To better understand the emergence of social anxiety, longitudinal studies investigating changes at the behavioral and neural levels are needed. BI was assessed in the laboratory at 2 and 3 years of age (N = 268). Children returned at 12 years, and an electroencephalogram was recorded while they performed a flanker task under 2 conditions: once while believing they were being observed by peers and once while not being observed. This methodology isolated changes in error monitoring (error-related negativity) and behavior (post-error reaction time slowing) as a function of social context. At 12 years, current social anxiety symptoms and lifetime diagnoses of social anxiety were obtained. Childhood BI prospectively predicted social-specific error-related negativity increases and social anxiety symptoms in adolescence; these symptoms directly related to clinical diagnoses. Serial mediation analysis showed that social error-related negativity changes explained the relations between BI and social anxiety symptoms (n = 107) and diagnosis (n = 92), but only insofar as social context also led to increased post-error reaction time slowing (a measure of error preoccupation); this model was not significantly related to generalized anxiety. Results extend prior work on socially induced changes in error monitoring and error preoccupation. These measures could index a neurobehavioral mechanism linking BI to adolescent social anxiety symptoms and diagnosis. This mechanism could relate more strongly to social than to generalized anxiety in the peri-adolescent period. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. All rights reserved.

  3. Human-robot cooperative movement training: learning a novel sensory motor transformation during walking with robotic assistance-as-needed.

    PubMed

    Emken, Jeremy L; Benitez, Raul; Reinkensmeyer, David J

    2007-03-28

    A prevailing paradigm of physical rehabilitation following neurologic injury is to "assist-as-needed" in completing desired movements. Several research groups are attempting to automate this principle with robotic movement training devices and patient cooperative algorithms that encourage voluntary participation. These attempts are currently not based on computational models of motor learning. Here we assume that motor recovery from a neurologic injury can be modelled as a process of learning a novel sensory motor transformation, which allows us to study a simplified experimental protocol amenable to mathematical description. Specifically, we use a robotic force field paradigm to impose a virtual impairment on the left leg of unimpaired subjects walking on a treadmill. We then derive an "assist-as-needed" robotic training algorithm to help subjects overcome the virtual impairment and walk normally. The problem is posed as an optimization of performance error and robotic assistance. The optimal robotic movement trainer becomes an error-based controller with a forgetting factor that bounds kinematic errors while systematically reducing its assistance when those errors are small. As humans have a natural range of movement variability, we introduce an error weighting function that causes the robotic trainer to disregard this variability. We experimentally validated the controller with ten unimpaired subjects by demonstrating how it helped the subjects learn the novel sensory motor transformation necessary to counteract the virtual impairment, while also preventing them from experiencing large kinematic errors. The addition of the error weighting function allowed the robot assistance to fade to zero even though the subjects' movements were variable. We also show that in order to assist-as-needed, the robot must relax its assistance at a rate faster than that of the learning human. The assist-as-needed algorithm proposed here can limit error during the learning of a dynamic motor task. The algorithm encourages learning by decreasing its assistance as a function of the ongoing progression of movement error. This type of algorithm is well suited for helping people learn dynamic tasks for which large kinematic errors are dangerous or discouraging, and thus may prove useful for robot-assisted movement training of walking or reaching following neurologic injury.
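
    The controller described above reduces to a simple per-step update: assistance decays through a forgetting factor and grows with weighted kinematic error, with a deadband so natural movement variability draws no assistance. The gains, deadband width, and error trajectory below are invented; the update law only mirrors the structure described in the abstract.

    ```python
    # Sketch of an error-based assist-as-needed update with a forgetting factor and
    # an error weighting (deadband) so natural movement variability is ignored.
    # F_{i+1} = f * F_i + g * w(e_i), with 0 < f < 1 so assistance fades when errors
    # stay small. All numbers here are illustrative.

    def deadband(error, width):
        """Error weighting: ignore errors smaller than the natural variability band."""
        if abs(error) <= width:
            return 0.0
        return error - width if error > 0 else error + width

    def assist_as_needed(errors, f=0.9, g=2.0, band=0.01):
        force = 0.0
        history = []
        for e in errors:
            force = f * force + g * deadband(e, band)   # decay + error-driven growth
            history.append(force)
        return history

    # Hypothetical step-by-step kinematic errors (m): large early, small once learned.
    errors = [0.06, 0.05, 0.04, 0.02, 0.012, 0.008, 0.005, 0.004]
    for i, F in enumerate(assist_as_needed(errors), 1):
        print(f"step {i}: assistance {F:.3f}")
    ```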

  4. Previous Estimates of Mitochondrial DNA Mutation Level Variance Did Not Account for Sampling Error: Comparing the mtDNA Genetic Bottleneck in Mice and Humans

    PubMed Central

    Wonnapinij, Passorn; Chinnery, Patrick F.; Samuels, David C.

    2010-01-01

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference. PMID:20362273
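
    For reference, the sampling error of a variance estimate that motivates such error bars can be written down directly under a normality assumption (the paper's exact derivation may differ; this is the standard result):

    ```latex
    % Standard error of the sample variance s^2 from n observations of a normal variable:
    % (n-1) s^2 / \sigma^2 \sim \chi^2_{n-1}, which has variance 2(n-1), hence
    \operatorname{Var}(s^2) = \frac{2\sigma^4}{n-1},
    \qquad
    \operatorname{SE}(s^2) \approx s^2 \sqrt{\frac{2}{n-1}} .
    % For example, n = 20 gives SE(s^2) \approx 0.32\, s^2 and n = 50 gives \approx 0.20\, s^2,
    % consistent with the sample-size guidance in the abstract above.
    ```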

  5. New paradigm for understanding in-flight decision making errors: a neurophysiological model leveraging human factors.

    PubMed

    Souvestre, P A; Landrock, C K; Blaber, A P

    2008-08-01

    Human factors-centered aviation accident analyses report that skill-based errors are the cause of 80% of all accidents, decision-making-related errors of 30%, and perceptual errors of 6% [1]. In-flight decision-making error has long been recognized as a major avenue leading to incidents and accidents. Over the past three decades, tremendous and costly efforts have been made to clarify causation, roles and responsibility, and to elaborate various preventative and curative countermeasures blending state-of-the-art biomedical and technological advances with psychophysiological training strategies. In-flight error statistics have not changed significantly, and a significant number of issues remain unresolved. The Fine Postural System and its corollary, Postural Deficiency Syndrome (PDS), both defined in the 1980s, are respectively neurophysiological and medical diagnostic models that reflect the status of central neural sensory-motor and cognitive regulatory controls. They have been used successfully in complex neurotraumatology and related rehabilitation for over two decades. Analysis of clinical data taken over a ten-year period from acute and chronic post-traumatic PDS patients shows a strong correlation between symptoms commonly exhibited before, alongside, or even after an error, and sensory-motor or PDS-related symptoms. Examples are given of how PDS-related central sensory-motor control dysfunction can be correctly identified and monitored via a neurophysiological ocular-vestibular-postural monitoring system. The data presented provide strong evidence that a specific biomedical assessment methodology can lead to a better understanding of the in-flight adaptive neurophysiological, cognitive and perceptual dysfunctional status that could induce in-flight errors. How relevant human factors can be identified and leveraged to maintain optimal performance is also addressed.

  6. Development and validation of Aviation Causal Contributors for Error Reporting Systems (ACCERS).

    PubMed

    Baker, David P; Krokos, Kelley J

    2007-04-01

    This investigation sought to develop a reliable and valid classification system for identifying and classifying the underlying causes of pilot errors reported under the Aviation Safety Action Program (ASAP). ASAP is a voluntary safety program that air carriers may establish to study pilot and crew performance on the line. In ASAP programs, similar to the Aviation Safety Reporting System, pilots self-report incidents by filing a short text description of the event. The identification of contributors to errors is critical if organizations are to improve human performance, yet it is difficult for analysts to extract this information from text narratives. A taxonomy was needed that could be used by pilots to classify the causes of errors. After completing a thorough literature review, pilot interviews and a card-sorting task were conducted in Studies 1 and 2 to develop the initial structure of the Aviation Causal Contributors for Event Reporting Systems (ACCERS) taxonomy. The reliability and utility of ACCERS was then tested in studies 3a and 3b by having pilots independently classify the primary and secondary causes of ASAP reports. The results provided initial evidence for the internal and external validity of ACCERS. Pilots were found to demonstrate adequate levels of agreement with respect to their category classifications. ACCERS appears to be a useful system for studying human error captured under pilot ASAP reports. Future work should focus on how ACCERS is organized and whether it can be used or modified to classify human error in ASAP programs for other aviation-related job categories such as dispatchers. Potential applications of this research include systems in which individuals self-report errors and that attempt to extract and classify the causes of those events.

  7. Systematic analysis of video data from different human–robot interaction studies: a categorization of social signals during error situations

    PubMed Central

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human–robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human–robot interaction experiments. For that, we analyzed 201 videos of five human–robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human–robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies. PMID:26217266

  8. The Human Tissue Act 2004 and the child donor.

    PubMed

    Baston, Jenny

    2009-05-01

    In 2001, the inquiry panel appointed to investigate the removal, retention and disposal of human organs and tissues at the Royal Liverpool Children's Hospital published its report. The panel's recommendations led to a new approach to consent for organ removal and storage under the new Human Tissue Act 2004. For child bone marrow donors, the new consent process requires all donor children or their parent to undergo a separate assessment before the bone marrow donation. They must be assessed by an accredited assessor who will submit a recommendation to the Human Tissue Authority for consideration. The unfortunate circumstances highlighted in the inquiry have led to changes to law, practice and culture that are benefiting other children and families.

  9. Evaluating flow cytometer performance with weighted quadratic least squares analysis of LED and multi-level bead data

    PubMed Central

    Parks, David R.; Khettabi, Faysal El; Chase, Eric; Hoffman, Robert A.; Perfetto, Stephen P.; Spidlen, Josef; Wood, James C.S.; Moore, Wayne A.; Brinkman, Ryan R.

    2017-01-01

    We developed a fully automated procedure for analyzing data from LED pulses and multi-level bead sets to evaluate backgrounds and photoelectron scales of cytometer fluorescence channels. The method improves on previous formulations by fitting a full quadratic model with appropriate weighting and by providing standard errors and peak residuals as well as the fitted parameters themselves. Here we describe the details of the methods and procedures involved and present a set of illustrations and test cases that demonstrate the consistency and reliability of the results. The automated analysis and fitting procedure is generally quite successful in providing good estimates of the Spe (statistical photoelectron) scales and backgrounds for all of the fluorescence channels on instruments with good linearity. The precision of the results obtained from LED data is almost always better than for multi-level bead data, but the bead procedure is easy to carry out and provides results good enough for most purposes. Including standard errors on the fitted parameters is important for understanding the uncertainty in the values of interest. The weighted residuals give information about how well the data fits the model, and particularly high residuals indicate bad data points. Known photoelectron scales and measurement channel backgrounds make it possible to estimate the precision of measurements at different signal levels and the effects of compensated spectral overlap on measurement quality. Combining this information with measurements of standard samples carrying dyes of biological interest, we can make accurate comparisons of dye sensitivity among different instruments. Our method is freely available through the R/Bioconductor package flowQB. PMID:28160404
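
    The core fitting step (a weighted quadratic model of pulse variance versus mean signal, whose linear coefficient relates to the photoelectron scale) can be sketched numerically as below. This is a generic weighted polynomial fit on synthetic peaks, not the flowQB implementation; the variable names and values are illustrative.

    ```python
    import numpy as np

    # Sketch: weighted quadratic fit of measurement variance vs. mean signal for a set
    # of LED pulse (or bead) peaks. Model: var = c0 + c1*mean + c2*mean^2, where c0 is
    # the background variance and 1/c1 approximates the statistical-photoelectron (Spe)
    # scale. Synthetic data; weights reflect the uncertainty of each variance estimate.

    rng = np.random.default_rng(3)
    true_bg, true_c1, true_c2 = 400.0, 2.5, 1e-4
    means = np.array([50, 150, 400, 1000, 2500, 6000, 15000, 40000], float)
    n_events = 5000
    true_var = true_bg + true_c1 * means + true_c2 * means**2
    # Simulated variance estimates with relative sampling noise ~ sqrt(2/(n-1)).
    var_est = true_var * (1 + rng.normal(scale=np.sqrt(2 / (n_events - 1)), size=means.size))

    weights = 1.0 / (var_est * np.sqrt(2 / (n_events - 1)))   # 1 / standard error
    coeffs, cov = np.polyfit(means, var_est, deg=2, w=weights, cov=True)
    c2, c1, c0 = coeffs
    stderr = np.sqrt(np.diag(cov))[::-1]                      # reorder to (c0, c1, c2)

    print(f"background variance c0 = {c0:.1f} +/- {stderr[0]:.1f}")
    print(f"photoelectrons per channel unit ~ 1/c1 = {1/c1:.3f}")
    ```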

  10. Computational diffraction tomographic microscopy with transport of intensity equation using a light-emitting diode array

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Zhang, Jialin; Zuo, Chao

    2017-10-01

    Optical diffraction tomography (ODT) is an effective label-free technique for quantitative refractive index imaging, which enables long-term monitoring of the internal three-dimensional (3D) structures and molecular composition of biological cells with minimal perturbation. However, existing optical tomographic methods generally rely on an interferometric configuration for phase measurement and sophisticated mechanical systems for sample rotation or beam scanning. As a result, the measurement is susceptible to phase errors arising from coherent speckle, environmental vibrations, and mechanical errors during the data acquisition process. To overcome these limitations, we present a new ODT technique based on non-interferometric phase retrieval and programmable illumination emitted from a light-emitting diode (LED) array. The experimental system is built on a traditional bright field microscope, with the light source replaced by a programmable LED array, which provides angle-variable quasi-monochromatic illumination with an angular coverage of ±37 degrees in both x and y directions (corresponding to an illumination numerical aperture of ~0.6). The transport of intensity equation (TIE) is utilized to recover the phase at different illumination angles, and the refractive index distribution is reconstructed based on the ODT framework under the first Rytov approximation. The missing-cone problem in ODT is addressed by using the iterative non-negative constraint algorithm, and the misalignment of the LED array is further numerically corrected to improve the accuracy of refractive index quantification. Experiments on polystyrene beads and thick biological specimens show that the proposed approach allows accurate refractive index reconstruction while greatly reducing system complexity and environmental sensitivity compared to conventional interferometric ODT approaches.
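
    For reference, the phase-retrieval step above rests on the transport of intensity equation; a standard paraxial form, as commonly written in the TIE literature (not quoted from this paper), is:

        % Transport of intensity equation (paraxial form); k = 2*pi/lambda is the
        % wavenumber, I the intensity, phi the phase, z the optical axis, and
        % \nabla_\perp the transverse gradient.
        \[
          \frac{\partial I(x,y;z)}{\partial z}
            = -\frac{1}{k}\,\nabla_{\perp}\cdot\left[\, I(x,y;z)\,\nabla_{\perp}\phi(x,y;z) \,\right]
        \]

    Solving this equation from through-focus intensity measurements (typically a few closely spaced defocus planes per LED illumination angle) yields the phase, after which the Rytov-based ODT reconstruction described above proceeds.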

  11. Optical diffraction tomography microscopy with transport of intensity equation using a light-emitting diode array

    NASA Astrophysics Data System (ADS)

    Li, Jiaji; Chen, Qian; Zhang, Jialin; Zhang, Zhao; Zhang, Yan; Zuo, Chao

    2017-08-01

    Optical diffraction tomography (ODT) is an effective label-free technique for quantitative refractive index imaging, which enables long-term monitoring of the internal three-dimensional (3D) structures and molecular composition of biological cells with minimal perturbation. However, existing optical tomographic methods generally rely on an interferometric configuration for phase measurement and sophisticated mechanical systems for sample rotation or beam scanning. As a result, the measurement is susceptible to phase errors arising from coherent speckle, environmental vibrations, and mechanical errors during the data acquisition process. To overcome these limitations, we present a new ODT technique based on non-interferometric phase retrieval and programmable illumination emitted from a light-emitting diode (LED) array. The experimental system is built on a traditional bright field microscope, with the light source replaced by a programmable LED array, which provides angle-variable quasi-monochromatic illumination with an angular coverage of ±37 degrees in both x and y directions (corresponding to an illumination numerical aperture of ~0.6). The transport of intensity equation (TIE) is utilized to recover the phase at different illumination angles, and the refractive index distribution is reconstructed based on the ODT framework under the first Rytov approximation. The missing-cone problem in ODT is addressed by using the iterative non-negative constraint algorithm, and the misalignment of the LED array is further numerically corrected to improve the accuracy of refractive index quantification. Experiments on polystyrene beads and thick biological specimens show that the proposed approach allows accurate refractive index reconstruction while greatly reducing system complexity and environmental sensitivity compared to conventional interferometric ODT approaches.

  12. Patient Safety in Medication Nomenclature: Orthographic and Semantic Properties of International Nonproprietary Names

    PubMed Central

    Bryan, Rachel; Aronson, Jeffrey K.; ten Hacken, Pius; Williams, Alison; Jordan, Sue

    2015-01-01

    Background Confusion between look-alike and sound-alike (LASA) medication names (such as mercaptamine and mercaptopurine) accounts for up to one in four medication errors, threatening patient safety. Error reduction strategies include computerized physician order entry interventions, and ‘Tall Man’ lettering. The purpose of this study is to explore the medication name designation process, to elucidate properties that may prime the risk of confusion. Methods and Findings We analysed the formal and semantic properties of 7,987 International Non-proprietary Names (INNs), in relation to naming guidelines of the World Health Organization (WHO) INN programme, and have identified potential for errors. We explored: their linguistic properties, the underlying taxonomy of stems to indicate pharmacological interrelationships, and similarities between INNs. We used Microsoft Excel for analysis, including calculation of Levenshtein edit distance (LED). Compliance with WHO naming guidelines was inconsistent. Since the 1970s there has been a trend towards compliance in formal properties, such as word length, but longer names published in the 1950s and 1960s are still in use. The stems used to show pharmacological interrelationships are not spelled consistently and the guidelines do not impose an unequivocal order on them, making the meanings of INNs difficult to understand. Pairs of INNs sharing a stem (appropriately or not) often have high levels of similarity (<5 LED), and thus have greater potential for confusion. Conclusions We have revealed a tension between WHO guidelines stipulating use of stems to denote meaning, and the aim of reducing similarities in nomenclature. To mitigate this tension and reduce the risk of confusion, the stem system should be made clear and well ordered, so as to avoid compounding the risk of confusion at the clinical level. The interplay between the different WHO INN naming principles should be further examined, to better understand their implications for the problem of LASA errors. PMID:26701761
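
    A minimal sketch of the similarity screen described above (the authors performed their analysis in Microsoft Excel; this Python version is purely illustrative, and the helper names and toy threshold argument are assumptions):

        # Minimal sketch: compute the Levenshtein edit distance (LED) between INN
        # pairs and flag pairs below the <5 similarity threshold discussed above.
        # Illustrative only; the study's analysis was carried out in Excel.
        from itertools import combinations

        def levenshtein(a: str, b: str) -> int:
            """Classic dynamic-programming edit distance (insert/delete/substitute)."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, start=1):
                curr = [i]
                for j, cb in enumerate(b, start=1):
                    curr.append(min(prev[j] + 1,                # deletion
                                    curr[j - 1] + 1,            # insertion
                                    prev[j - 1] + (ca != cb)))  # substitution
                prev = curr
            return prev[-1]

        def flag_similar_pairs(names, threshold=5):
            """Return name pairs whose LED is below the threshold (potential LASA risk)."""
            return [(x, y, levenshtein(x, y))
                    for x, y in combinations(sorted(set(names)), 2)
                    if levenshtein(x, y) < threshold]

        # The pair cited above as a known source of confusion:
        print(levenshtein("mercaptamine", "mercaptopurine"))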

  13. Patient Safety in Medication Nomenclature: Orthographic and Semantic Properties of International Nonproprietary Names.

    PubMed

    Bryan, Rachel; Aronson, Jeffrey K; ten Hacken, Pius; Williams, Alison; Jordan, Sue

    2015-01-01

    Confusion between look-alike and sound-alike (LASA) medication names (such as mercaptamine and mercaptopurine) accounts for up to one in four medication errors, threatening patient safety. Error reduction strategies include computerized physician order entry interventions, and 'Tall Man' lettering. The purpose of this study is to explore the medication name designation process, to elucidate properties that may prime the risk of confusion. We analysed the formal and semantic properties of 7,987 International Non-proprietary Names (INNs), in relation to naming guidelines of the World Health Organization (WHO) INN programme, and have identified potential for errors. We explored: their linguistic properties, the underlying taxonomy of stems to indicate pharmacological interrelationships, and similarities between INNs. We used Microsoft Excel for analysis, including calculation of Levenshtein edit distance (LED). Compliance with WHO naming guidelines was inconsistent. Since the 1970s there has been a trend towards compliance in formal properties, such as word length, but longer names published in the 1950s and 1960s are still in use. The stems used to show pharmacological interrelationships are not spelled consistently and the guidelines do not impose an unequivocal order on them, making the meanings of INNs difficult to understand. Pairs of INNs sharing a stem (appropriately or not) often have high levels of similarity (<5 LED), and thus have greater potential for confusion. We have revealed a tension between WHO guidelines stipulating use of stems to denote meaning, and the aim of reducing similarities in nomenclature. To mitigate this tension and reduce the risk of confusion, the stem system should be made clear and well ordered, so as to avoid compounding the risk of confusion at the clinical level. The interplay between the different WHO INN naming principles should be further examined, to better understand their implications for the problem of LASA errors.

  14. Evaluation of OLED and edge-lit LED lighting panels

    NASA Astrophysics Data System (ADS)

    Mou, Xi; Narendran, Nadarajah; Zhu, Yiting; Freyssinier, Jean Paul

    2016-09-01

    Solid-state lighting (SSL) offers a new technology platform for lighting designers and end-users to illuminate spaces with low energy demand. Two types of SSL sources include organic light-emitting diodes (OLEDs) and light-emitting diodes (LEDs). OLED is an area light source, and its primary competing technology is the edge-lit LED panel. Generally, both of these technologies are considered similar in shape and appearance, but there is little understanding of how people perceive discomfort glare from large area light sources. The objective of this study was to evaluate discomfort glare for the two lighting technologies under similar operating conditions by gathering observers' reactions. The human factors study results showed no statistically significant difference in human response to discomfort glare between OLED and edge-lit LED panels when the two light sources produced the same lighting stimulus. This means both technologies appeared equally glary beyond a certain luminance.

  15. Neural Correlates of User-initiated Motor Success and Failure - A Brain-Computer Interface Perspective.

    PubMed

    Yazmir, Boris; Reiner, Miriam

    2018-05-15

    Any motor action is, by nature, potentially accompanied by human errors. In order to facilitate development of error-tailored Brain-Computer Interface (BCI) correction systems, we focused on internal, human-initiated errors, and investigated EEG correlates of user outcome successes and errors during a continuous 3D virtual tennis game against a computer player. We used a multisensory, 3D, highly immersive environment. Missing and repelling the tennis ball were considered as 'error' (miss) and 'success' (repel), respectively. Unlike most previous studies, where the environment "encouraged" the participant to make a mistake, here errors happened naturally, resulting from motor-perceptual-cognitive processes of incorrect estimation of the ball kinematics, and can be regarded as user-internal, self-initiated errors. Results show distinct and well-defined Event-Related Potentials (ERPs), embedded in the ongoing EEG, that differ across conditions by waveforms, scalp signal distribution maps, source estimation results (sLORETA) and time-frequency patterns, establishing a series of typical features that allow valid discrimination between user internal outcome success and error. The significant delay in latency between positive peaks of error- and success-related ERPs suggests a cross-talk between top-down and bottom-up processing, represented by an outcome recognition process, in the context of the game world. Success-related ERPs had a central scalp distribution, while error-related ERPs were centro-parietal. The unique characteristics and sharp differences between EEG correlates of error/success provide the crucial components for an improved BCI system. The features of the EEG waveform can be used to detect user action outcome, to be fed into the BCI correction system. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
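
    The condition-wise ERPs reported above are obtained, in general, by epoching the EEG around outcome events and averaging within each condition; a minimal sketch of that standard step under assumed array shapes and labels (not the authors' full pipeline, which also includes scalp mapping, sLORETA source estimation, and time-frequency analysis):

        # Minimal sketch of condition-wise ERP averaging (not the authors' pipeline).
        # Assumes eeg is a (n_channels, n_samples) array, events is a list of
        # (sample_index, label) markers with label in {"error", "success"}, and
        # fs is the sampling rate in Hz.
        import numpy as np

        def average_erp(eeg, events, fs, label, tmin=-0.2, tmax=0.8):
            """Average epochs time-locked to events carrying one outcome label."""
            pre, post = int(-tmin * fs), int(tmax * fs)
            epochs = []
            for idx, lab in events:
                if lab != label or idx - pre < 0 or idx + post > eeg.shape[1]:
                    continue
                epoch = eeg[:, idx - pre: idx + post].astype(float)
                # Baseline-correct each channel using the pre-event interval.
                epoch -= epoch[:, :pre].mean(axis=1, keepdims=True)
                epochs.append(epoch)
            return np.mean(epochs, axis=0)   # (n_channels, n_times) ERP

        # erp_error = average_erp(eeg, events, fs, "error")
        # erp_success = average_erp(eeg, events, fs, "success")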

  16. Injection Molding Parameters Calculations by Using Visual Basic (VB) Programming

    NASA Astrophysics Data System (ADS)

    Tony, B. Jain A. R.; Karthikeyen, S.; Alex, B. Jeslin A. R.; Hasan, Z. Jahid Ali

    2018-03-01

    Nowadays, the manufacturing industry plays a vital role in production sectors. To fabricate a component, many design calculations have to be done, and there is a chance of human error during these calculations. The aim of this project is to create a special module using Visual Basic (VB) programming to calculate injection molding parameters and thereby avoid human errors. To create an injection mold for a spur gear component, the following parameters have to be calculated: cooling capacity, cooling channel diameter, cooling channel length, runner length and runner diameter, and gate diameter and gate pressure. To calculate these injection molding parameters, a separate module has been created using Visual Basic (VB) programming to reduce human errors. The module outputs the dimensions of the injection molding components, such as the mold cavity and core design and the ejector plate design.

  17. Human error mitigation initiative (HEMI) : summary report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and are cumbersome to characterize as thorough. An alternative and proposed method begins with leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor relative to other task elements such as tooling and process flow. Future recommended steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  18. Risk assessment of component failure modes and human errors using a new FMECA approach: application in the safety analysis of HDR brachytherapy.

    PubMed

    Giardina, M; Castiglia, F; Tomarchio, E

    2014-12-01

    Failure mode, effects and criticality analysis (FMECA) is a safety technique extensively used in many different industrial fields to identify and prevent potential failures. In the application of traditional FMECA, the risk priority number (RPN) is determined to rank the failure modes; however, the method has been criticised for having several weaknesses. Moreover, it is unable to adequately deal with human errors or negligence. In this paper, a new versatile fuzzy rule-based assessment model is proposed to evaluate the RPN index to rank both component failure and human error. The proposed methodology is applied to potential radiological over-exposure of patients during high-dose-rate brachytherapy treatments. The critical analysis of the results can provide recommendations and suggestions regarding safety provisions for the equipment and procedures required to reduce the occurrence of accidental events.
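
    For context, traditional FMECA ranks each failure mode by the risk priority number RPN = severity x occurrence x detectability, the index this paper criticises and replaces with a fuzzy rule-based evaluation; a minimal sketch of the traditional calculation only, with hypothetical example entries (the fuzzy model itself is not reproduced here):

        # Minimal sketch of the traditional RPN ranking criticised in the paper;
        # the fuzzy rule-based replacement is not shown. Entries are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            description: str
            severity: int       # 1 (negligible) .. 10 (catastrophic)
            occurrence: int     # 1 (remote) .. 10 (frequent)
            detectability: int  # 1 (almost certain detection) .. 10 (undetectable)

            @property
            def rpn(self) -> int:
                return self.severity * self.occurrence * self.detectability

        modes = [
            FailureMode("source stuck during transfer (component failure)", 9, 3, 4),
            FailureMode("wrong treatment plan loaded (human error)", 8, 4, 5),
        ]
        for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
            print(f"RPN={m.rpn:4d}  {m.description}")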

  19. NASA light emitting diode medical applications from deep space to deep sea

    NASA Astrophysics Data System (ADS)

    Whelan, Harry T.; Buchmann, Ellen V.; Whelan, Noel T.; Turner, Scott G.; Cevenini, Vita; Stinson, Helen; Ignatius, Ron; Martin, Todd; Cwiklinski, Joan; Meyer, Glenn A.; Hodgson, Brian; Gould, Lisa; Kane, Mary; Chen, Gina; Caviness, James

    2001-02-01

    This work is supported and managed through the NASA Marshall Space Flight Center-SBIR Program. LED-technology developed for NASA plant growth experiments in space shows promise for delivering light deep into tissues of the body to promote wound healing and human tissue growth. We present the results of LED-treatment of cells grown in culture and the effects of LEDs on patients' chronic and acute wounds. LED-technology is also biologically optimal for photodynamic therapy of cancer and we discuss our successes using LEDs in conjunction with light-activated chemotherapeutic drugs.

  20. Operator performance-enhancing technologies to improve safety. A US DOT safety initiative for meeting the human-centered systems challenge.

    DOT National Transportation Integrated Search

    1999-11-01

    The program implements DOT Human Factors Coordinating Committee (HFCC) recommendations for a coordinated Departmental Human Factors Research Program to advance the human-centered systems approach for enhancing transportation safety. Human error is a ...

  1. The ethics and practical importance of defining, distinguishing and disclosing nursing errors: a discussion paper.

    PubMed

    Johnstone, Megan-Jane; Kanitsaki, Olga

    2006-03-01

    Nurses globally are required and expected to report nursing errors. As is clearly demonstrated in the international literature, fulfilling this requirement is not, however, without risks. In this discussion paper, the notion of 'nursing error', the practical and moral importance of defining, distinguishing and disclosing nursing errors and how a distinct definition of 'nursing error' fits with the new 'system approach' to human-error management in health care are critiqued. Drawing on international literature and two key case exemplars from the USA and Australia, arguments are advanced to support the view that although it is 'right' for nurses to report nursing errors, it will be very difficult for them to do so unless a non-punitive approach to nursing-error management is adopted.

  2. Repeat-aware modeling and correction of short read errors.

    PubMed

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers with multiple occurrences in the genome. Error detection and correction were mostly applied to genomes with low repeat content and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under GNU GPL3 license and Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id = redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors for genomes with high repeat content.
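
    A minimal sketch of the baseline kmer-frequency screen described above: count observed kmers across reads and flag those below a coverage threshold as likely errors. The toy reads, k, and threshold are assumptions, and the paper's repeat-aware statistical model for inferring genomic kmer frequencies is not reproduced here:

        # Minimal sketch of the baseline kmer-frequency screen: kmers observed
        # fewer than `threshold` times are treated as likely sequencing errors.
        from collections import Counter

        def count_kmers(reads, k):
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        def flag_error_kmers(counts, threshold):
            """Kmers below the frequency threshold are flagged as suspect."""
            return {kmer for kmer, c in counts.items() if c < threshold}

        reads = ["ACGTACGTGA", "ACGTACGTGA", "ACGTTCGTGA"]   # toy data; third read has one error
        counts = count_kmers(reads, k=5)
        print(flag_error_kmers(counts, threshold=2))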

  3. Learning without Borders: A Review of the Implementation of Medical Error Reporting in Médecins Sans Frontières

    PubMed Central

    Shanks, Leslie; Bil, Karla; Fernhout, Jena

    2015-01-01

    Objective To analyse the results from the first 3 years of implementation of a medical error reporting system in Médecins Sans Frontières-Operational Centre Amsterdam (MSF) programs. Methodology A medical error reporting policy was developed with input from frontline workers and introduced to the organisation in June 2010. The definition of medical error used was “the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim.” All confirmed error reports were entered into a database without the use of personal identifiers. Results 179 errors were reported from 38 projects in 18 countries over the period of June 2010 to May 2013. The rate of reporting was 31, 42, and 106 incidents/year for reporting year 1, 2 and 3 respectively. The majority of errors were categorized as dispensing errors (62 cases or 34.6%), errors or delays in diagnosis (24 cases or 13.4%) and inappropriate treatment (19 cases or 10.6%). The impact of the error was categorized as no harm (58, 32.4%), harm (70, 39.1%), death (42, 23.5%) and unknown in 9 (5.0%) reports. Disclosure to the patient took place in 34 cases (19.0%), did not take place in 46 (25.7%), was not applicable for 5 (2.8%) cases and not reported for 94 (52.5%). Remedial actions introduced at headquarters level included guideline revisions and changes to medical supply procedures. At field level improvements included increased training and supervision, adjustments in staffing levels, and adaptations to the organization of the pharmacy. Conclusion It was feasible to implement a voluntary reporting system for medical errors despite the complex contexts in which MSF intervenes. The reporting policy led to system changes that improved patient safety and accountability to patients. Challenges remain in achieving widespread acceptance of the policy as evidenced by the low reporting and disclosure rates. PMID:26381622

  4. The error in total error reduction.

    PubMed

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
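
    The TER/LER contrast can be illustrated with a Rescorla-Wagner-style update: the total-error rule trains each cue on the outcome minus the summed prediction of all present cues, whereas the local-error rule trains each cue on its own prediction error. A minimal sketch with assumed parameter values, not the authors' fitted models:

        # Minimal sketch contrasting total-error-reduction (TER) and local-error-
        # reduction (LER) update rules; parameters are assumed, not fitted.
        def ter_update(w, present, outcome, alpha=0.1):
            """Rescorla-Wagner-style rule: error = outcome - summed prediction."""
            error = outcome - sum(w[c] for c in present)
            for c in present:
                w[c] += alpha * error

        def ler_update(w, present, outcome, alpha=0.1):
            """Local rule: each cue is trained on its own prediction error."""
            for c in present:
                w[c] += alpha * (outcome - w[c])

        w_ter = {"A": 0.0, "B": 0.0}
        w_ler = {"A": 0.0, "B": 0.0}
        for _ in range(50):                 # compound AB reinforced on every trial
            ter_update(w_ter, ["A", "B"], 1.0)
            ler_update(w_ler, ["A", "B"], 1.0)
        print(w_ter)   # TER: the cues share the outcome, weights sum to ~1.0
        print(w_ler)   # LER: each weight approaches ~1.0 on its own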

  5. Clinical decision-making: heuristics and cognitive biases for the ophthalmologist.

    PubMed

    Hussain, Ahsen; Oestreicher, James

    Diagnostic errors have a significant impact on health care outcomes and patient care. The underlying causes and development of diagnostic error are complex with flaws in health care systems, as well as human error, playing a role. Cognitive biases and a failure of decision-making shortcuts (heuristics) are human factors that can compromise the diagnostic process. We describe these mechanisms, their role with the clinician, and provide clinical scenarios to highlight the various points at which biases may emerge. We discuss strategies to modify the development and influence of these processes and the vulnerability of heuristics to provide insight and improve clinical outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Corrigendum to "Foraging behavior of lactating South American sea lions (Otaria flavescens) and spatial-temporal resource overlap with the Uruguayan fisheries" [Deep-Sea Res. II 88-89 (2013) 106-109]

    NASA Astrophysics Data System (ADS)

    Riet-Sapriza, Federico G.; Costa, Daniel P.; Franco-Trecu, Valentina; Marín, Yamandú; Chocca, Julio; González, Bernardo; Beathyate, Gastón; Louise Chilvers, B.; Hückstadt, Luis A.

    2016-10-01

    The authors of Riet-Sapriza et al. (2013) regret that, after publication of the original manuscript, an error was found in the estimation of lactating South American sea lions' prey consumption, which led to an overestimation of the daily and annual prey consumption.

  7. The High Cost of Complexity in Experimental Design and Data Analysis: Type I and Type II Error Rates in Multiway ANOVA.

    ERIC Educational Resources Information Center

    Smith, Rachel A.; Levine, Timothy R.; Lachlan, Kenneth A.; Fediuk, Thomas A.

    2002-01-01

    Notes that the availability of statistical software packages has led to a sharp increase in use of complex research designs and complex statistical analyses in communication research. Reports a series of Monte Carlo simulations which demonstrate that this complexity may come at a heavier cost than many communication researchers realize. Warns…

  8. Publisher Correction: Quantum engineering of transistors based on 2D materials heterostructures

    NASA Astrophysics Data System (ADS)

    Iannaccone, Giuseppe; Bonaccorso, Francesco; Colombo, Luigi; Fiori, Gianluca

    2018-06-01

    In the version of this Perspective originally published, in the email address for the author Giuseppe Iannaccone, the surname was incorrectly given as "innaconne"; this has now been corrected in all versions of the Perspective. Also, an error in the production process led to Figs. 1, 2 and 3 being of low resolution; these have now been replaced with higher-quality versions.

  9. Meiotic Divisions: No Place for Gender Equality.

    PubMed

    El Yakoubi, Warif; Wassmann, Katja

    2017-01-01

    In multicellular organisms the fusion of two gametes with a haploid set of chromosomes leads to the formation of the zygote, the first cell of the embryo. Accurate execution of the meiotic cell division to generate a female and a male gamete is required for the generation of healthy offspring harboring the correct number of chromosomes. Unfortunately, meiosis is error prone. This has severe consequences for fertility and under certain circumstances, health of the offspring. In humans, female meiosis is extremely error prone. In this chapter we will compare male and female meiosis in humans to illustrate why and at which frequency errors occur, and describe how this affects pregnancy outcome and health of the individual. We will first introduce key notions of cell division in meiosis and how they differ from mitosis, followed by a detailed description of the events that are prone to errors during the meiotic divisions.

  10. [Relations between health information systems and patient safety].

    PubMed

    Nøhr, Christian

    2012-11-05

    Health information systems have the potential to reduce medical errors, and indeed many studies have shown a significant reduction. However, if the systems are not designed and implemented properly, there is evidence that suggests that new types of errors will arise--i.e., technology-induced errors. Health information systems will need to undergo a more rigorous evaluation. Usability evaluation and simulation tests with humans in the loop can help to detect and prevent technology-induced errors before the systems are deployed in real health-care settings.

  11. Being watched by others eliminates the effect of emotional arousal on inhibitory control.

    PubMed

    Yu, Jiaxin; Tseng, Philip; Muggleton, Neil G; Juan, Chi-Hung

    2015-01-01

    The psychological effect of being watched by others has been proven a powerful tool in modulating social behaviors (e.g., charitable giving) and altering cognitive performance (e.g., visual search). Here we tested whether such awareness would affect one of the core elements of human cognition: emotional processing and impulse control. Using an emotion stop-signal paradigm, we found that viewing emotionally-arousing erotic images before attempting to inhibit a motor response impaired participants' inhibition ability, but such an impairing effect was completely eliminated when participants were led to believe that their facial expressions were monitored by a webcam. Furthermore, there was no post-error slowing in any of the conditions, thus these results cannot be explained by a deliberate speed-accuracy tradeoff or other types of conscious shift in strategy. Together, these findings demonstrate that the interaction between emotional arousal and impulse control can be dependent on one's state of self-consciousness. Furthermore, this study also highlights the effect that the mere presence of the experimenter may have on participants' cognitive performance, even if it's only a webcam.

  12. Lessons from the Salk Polio Vaccine: Methods for and Risks of Rapid Translation

    PubMed Central

    Juskewitch, Justin E.; Tapia, Carmen J.; Windebank, Anthony J.

    2010-01-01

    The Salk inactivated poliovirus vaccine is one of the most rapid examples of bench-to-bedside translation in medicine. In the span of 6 years, the key basic lab discoveries facilitating the development of the vaccine were made, optimization and safety testing were completed in both animals and human volunteers, the largest clinical trial in history, involving 1.8 million children, was conducted, and the results were released to an eagerly awaiting public. Such examples of rapid translation can not only offer clues to what factors successfully drive and accelerate the translational process but also to what mistakes can occur (and thus should be avoided) during such a swift process. In this commentary, we explore the translational path of the Salk polio vaccine from the key basic science discoveries to the 1954 Field Trials and delve into the scientific and sociopolitical factors that aided in its rapid development. Moreover, we look at the Cutter and Wyeth incidents after the vaccine's approval and the errors that led to them. Clin Trans Sci 2010; Volume 3: 182–185 PMID:20718820

  13. Being watched by others eliminates the effect of emotional arousal on inhibitory control

    PubMed Central

    Yu, Jiaxin; Tseng, Philip; Muggleton, Neil G.; Juan, Chi-Hung

    2015-01-01

    The psychological effect of being watched by others has been proven a powerful tool in modulating social behaviors (e.g., charitable giving) and altering cognitive performance (e.g., visual search). Here we tested whether such awareness would affect one of the core elements of human cognition: emotional processing and impulse control. Using an emotion stop-signal paradigm, we found that viewing emotionally-arousing erotic images before attempting to inhibit a motor response impaired participants’ inhibition ability, but such an impairing effect was completely eliminated when participants were led to believe that their facial expressions were monitored by a webcam. Furthermore, there was no post-error slowing in any of the conditions, thus these results cannot be explained by a deliberate speed-accuracy tradeoff or other types of conscious shift in strategy. Together, these findings demonstrate that the interaction between emotional arousal and impulse control can be dependent on one’s state of self-consciousness. Furthermore, this study also highlights the effect that the mere presence of the experimenter may have on participants’ cognitive performance, even if it’s only a webcam. PMID:25653635

  14. Glare effect for three types of street lamps based on White LEDs

    NASA Astrophysics Data System (ADS)

    Sun, Ching-Cherng; Jiang, Chong-Jhih; Chen, Yi-Chun; Yang, Tsung-Hsun

    2014-05-01

    This study aimed to assess the glare effect from LED-based street lamps with three general optical designs: cluster LEDs with a single lens, an LED array paired with a lens array, and a tilted LED array. Observation conditions were simulated based on various locations and viewing axes. Equivalent luminance calculations were used to reveal the glare levels of the three designs. The age effect on the calculated equivalent luminance was also examined for observers aged 40 or 60. The results demonstrate that, among the three design types, an LED array paired with a lens array causes relatively less glare for most viewing conditions.

  15. Anatomy of an incident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cournoyer, Michael E.; Trujillo, Stanley; Lawton, Cindy M.

    A traditional view of incidents is that they are caused by shortcomings in human competence, attention, or attitude. It may be under the label of “loss of situational awareness,” procedure “violation,” or “poor” management. A different view is that human error is not the cause of failure, but a symptom of failure – trouble deeper inside the system. In this perspective, human error is not the conclusion, but rather the starting point of investigations. During an investigation, three types of information are gathered: physical, documentary, and human (recall/experience). Through the causal analysis process, apparent cause or apparent causes are identified as the most probable cause or causes of an incident or condition that management has the control to fix and for which effective recommendations for corrective actions can be generated. A causal analysis identifies relevant human performance factors. In the following presentation, the anatomy of a radiological incident is discussed, and one case study is presented. We analyzed the contributing factors that caused a radiological incident. Underlying conditions, decisions, actions, and inactions that contributed to the incident are identified, including weaknesses that may warrant improvements that tolerate error. Measures that reduce consequences or likelihood of recurrence are discussed.

  16. Anatomy of an incident

    DOE PAGES

    Cournoyer, Michael E.; Trujillo, Stanley; Lawton, Cindy M.; ...

    2016-03-23

    A traditional view of incidents is that they are caused by shortcomings in human competence, attention, or attitude. It may be under the label of “loss of situational awareness,” procedure “violation,” or “poor” management. A different view is that human error is not the cause of failure, but a symptom of failure – trouble deeper inside the system. In this perspective, human error is not the conclusion, but rather the starting point of investigations. During an investigation, three types of information are gathered: physical, documentary, and human (recall/experience). Through the causal analysis process, apparent cause or apparent causes are identified as the most probable cause or causes of an incident or condition that management has the control to fix and for which effective recommendations for corrective actions can be generated. A causal analysis identifies relevant human performance factors. In the following presentation, the anatomy of a radiological incident is discussed, and one case study is presented. We analyzed the contributing factors that caused a radiological incident. Underlying conditions, decisions, actions, and inactions that contributed to the incident are identified, including weaknesses that may warrant improvements that tolerate error. Measures that reduce consequences or likelihood of recurrence are discussed.

  17. Token reinforcement, choice, and self-control in pigeons.

    PubMed Central

    Jackson, K; Hackenberg, T D

    1996-01-01

    Pigeons were exposed to self-control procedures that involved illumination of light-emitting diodes (LEDs) as a form of token reinforcement. In a discrete-trials arrangement, subjects chose between one and three LEDs; each LED was exchangeable for 2-s access to food during distinct posttrial exchange periods. In Experiment 1, subjects generally preferred the immediate presentation of a single LED over the delayed presentation of three LEDs, but differences in the delay to the exchange period between the two options prevented a clear assessment of the relative influence of LED delay and exchange-period delay as determinants of choice. In Experiment 2, in which delays to the exchange period from either alternative were equal in most conditions, all subjects preferred the delayed three LEDs more often than in Experiment-1. In Experiment 3, subjects preferred the option that resulted in a greater amount of food more often if the choices also produced LEDs than if they did not. In Experiment 4, preference for the delayed three LEDs was obtained when delays to the exchange period were equal, but reversed in favor of an immediate single LED when the latter choice also resulted in quicker access to exchange periods. The overall pattern of results suggests that (a) delay to the exchange period is a more critical determinant of choice than is delay to token presentation; (b) tokens may function as conditioned reinforcers, although their discriminative properties may be responsible for the self-control that occurs under token reinforcer arrangements; and (c) previously reported differences in the self-control choices of humans and pigeons may have resulted at least in part from the procedural conventions of using token reinforcers with human subjects and food reinforcers with pigeon subjects. PMID:8755699

  18. Time-Sharing-Based Synchronization and Performance Evaluation of Color-Independent Visual-MIMO Communication.

    PubMed

    Kwon, Tae-Ho; Kim, Jai-Eun; Kim, Ki-Doo

    2018-05-14

    In the field of communication, synchronization is always an important issue. The communication between a light-emitting diode (LED) array (LEA) and a camera is known as visual multiple-input multiple-output (MIMO), for which the data transmitter and receiver must be synchronized for seamless communication. In visual-MIMO, LEDs generally have a faster data rate than the camera. Hence, we propose an effective time-sharing-based synchronization technique with its color-independent characteristics providing the key to overcome this synchronization problem in visual-MIMO communication. We also evaluated the performance of our synchronization technique by varying the distance between the LEA and camera. A graphical analysis is also presented to compare the symbol error rate (SER) at different distances.

  19. Application of manual control theory to the study of biological stress

    NASA Technical Reports Server (NTRS)

    Replogle, C. R.; Holden, F. M.; Iay, C. N.

    1972-01-01

    A study was run using both a stable, third-order task and an adaptive first-order unstable task, singly and in combination, to test the effects of 2 min of hypoxia (22000 ft) on the human operator. The results indicate that the RMS error in the stable task does not change as a function of hypoxic stress, whereas the error in the unstable task changes significantly. Models involving human operator parameter changes and noise injection are discussed.

  20. 21st Century Human Performance.

    ERIC Educational Resources Information Center

    Clark, Ruth Colvin

    1995-01-01

    Technology can extend human memory and improve performance, but bypassing human intelligence has its dangers. Cognitive apprenticeships that compress learning experiences, provide coaching, and allow trial and error can build complex problem-solving skills and develop expertise. (SK)

  1. ERRATUM: "Beating the Spin-Down Limit on Gravitational Wave Emission from the Crab Pulsar" (2008, ApJ, 683, L45)

    NASA Astrophysics Data System (ADS)

    Abbott, B.; Abbott, R.; Adhikari, R.; Ajith, P.; Allen, B.; Allen, G.; Amin, R.; Anderson, S. B.; Anderson, W. G.; Arain, M. A.; Araya, M.; Armandula, H.; Armor, P.; Aso, Y.; Aston, S.; Aufmuth, P.; Aulbert, C.; Babak, S.; Ballmer, S.; Bantilan, H.; Barish, B. C.; Barker, C.; Barker, D.; Barr, B.; Barriga, P.; Barton, M. A.; Bastarrika, M.; Bayer, K.; Betzwieser, J.; Beyersdorf, P. T.; Bilenko, I. A.; Billingsley, G.; Biswas, R.; Black, E.; Blackburn, K.; Blackburn, L.; Blair, D.; Bland, B.; Bodiya, T. P.; Bogue, L.; Bork, R.; Boschi, V.; Bose, S.; Brady, P. R.; Braginsky, V. B.; Brau, J. E.; Brinkmann, M.; Brooks, A.; Brown, D. A.; Brunet, G.; Bullington, A.; Buonanno, A.; Burmeister, O.; Byer, R. L.; Cadonati, L.; Cagnoli, G.; Camp, J. B.; Cannizzo, J.; Cannon, K.; Cao, J.; Cardenas, L.; Casebolt, T.; Castaldi, G.; Cepeda, C.; Chalkley, E.; Charlton, P.; Chatterji, S.; Chelkowski, S.; Chen, Y.; Christensen, N.; Clark, D.; Clark, J.; Cokelaer, T.; Conte, R.; Cook, D.; Corbitt, T.; Coyne, D.; Creighton, J. D. E.; Cumming, A.; Cunningham, L.; Cutler, R. M.; Dalrymple, J.; Danzmann, K.; Davies, G.; De Bra, D.; Degallaix, J.; Degree, M.; Dergachev, V.; Desai, S.; De Salvo, R.; Dhurandhar, S.; Díaz, M.; Dickson, J.; Dietz, A.; Donovan, F.; Dooley, K. L.; Doomes, E. E.; Drever, R. W. P.; Duke, I.; Dumas, J.-C.; Dupuis, R. J.; Dwyer, J. G.; Echols, C.; Effler, A.; Ehrens, P.; Espinoza, E.; Etzel, T.; Evans, T.; Fairhurst, S.; Fan, Y.; Fazi, D.; Fehrmann, H.; Fejer, M. M.; Finn, L. S.; Flasch, K.; Fotopoulos, N.; Freise, A.; Frey, R.; Fricke, T.; Fritschel, P.; Frolov, V. V.; Fyffe, M.; Garofoli, J.; Gholami, I.; Giaime, J. A.; Giampanis, S.; Giardina, K. D.; Goda, K.; Goetz, E.; Goggin, L.; González, G.; Gossler, S.; Gouaty, R.; Grant, A.; Gras, S.; Gray, C.; Gray, M.; Greenhalgh, R. J. S.; Gretarsson, A. M.; Grimaldi, F.; Grosso, R.; Grote, H.; Grunewald, S.; Guenther, M.; Gustafson, E. K.; Gustafson, R.; Hage, B.; Hallam, J. M.; Hammer, D.; Hanna, C.; Hanson, J.; Harms, J.; Harry, G.; Harstad, E.; Hayama, K.; Hayler, T.; Heefner, J.; Heng, I. S.; Hennessy, M.; Heptonstall, A.; Hewitson, M.; Hild, S.; Hirose, E.; Hoak, D.; Hosken, D.; Hough, J.; Huttner, S. H.; Ingram, D.; Ito, M.; Ivanov, A.; Johnson, B.; Johnson, W. W.; Jones, D. I.; Jones, G.; Jones, R.; Ju, L.; Kalmus, P.; Kalogera, V.; Kamat, S.; Kanner, J.; Kasprzyk, D.; Katsavounidis, E.; Kawabe, K.; Kawamura, S.; Kawazoe, F.; Kells, W.; Keppel, D. G.; Khalili, F. Ya.; Khan, R.; Khazanov, E.; Kim, C.; King, P.; Kissel, J. S.; Klimenko, S.; Kokeyama, K.; Kondrashov, V.; Kopparapu, R. K.; Kozak, D.; Kozhevatov, I.; Krishnan, B.; Kwee, P.; Lam, P. K.; Landry, M.; Lang, M. M.; Lantz, B.; Lazzarini, A.; Lei, M.; Leindecker, N.; Leonhardt, V.; Leonor, I.; Libbrecht, K.; Lin, H.; Lindquist, P.; Lockerbie, N. A.; Lodhia, D.; Lormand, M.; Lu, P.; Lubiński, M.; Lucianetti, A.; Lück, H.; Machenschalk, B.; Mac Innis, M.; Mageswaran, M.; Mailand, K.; Mandic, V.; Márka, S.; Márka, Z.; Markosyan, A.; Markowitz, J.; Maros, E.; Martin, I.; Martin, R. M.; Marx, J. N.; Mason, K.; Matichard, F.; Matone, L.; Matzner, R.; Mavalvala, N.; McCarthy, R.; McClelland, D. E.; McGuire, S. C.; McHugh, M.; McIntyre, G.; McIvor, G.; McKechan, D.; McKenzie, K.; Meier, T.; Melissinos, A.; Mendell, G.; Mercer, R. A.; Meshkov, S.; Messenger, C. J.; Meyers, D.; Miller, J.; Minelli, J.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Miyakawa, O.; Moe, B.; Mohanty, S.; Moreno, G.; Mossavi, K.; Lowry, C. 
Mow; Mueller, G.; Mukherjee, S.; Mukhopadhyay, H.; Müller-Ebhardt, H.; Munch, J.; Murray, P.; Myers, E.; Myers, J.; Nash, T.; Nelson, J.; Newton, G.; Nishizawa, A.; Numata, K.; O'Dell, J.; Ogin, G.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Ottens, R. S.; Overmier, H.; Owen, B. J.; Pan, Y.; Pankow, C.; Papa, M. A.; Parameshwaraiah, V.; Patel, P.; Pedraza, M.; Penn, S.; Perreca, A.; Petrie, T.; Pinto, I. M.; Pitkin, M.; Pletsch, H. J.; Plissi, M. V.; Postiglione, F.; Principe, M.; Prix, R.; Quetschke, V.; Raab, F.; Rabeling, D. S.; Radkins, H.; Rainer, N.; Rakhmanov, M.; Ramsunder, M.; Rehbein, H.; Reid, S.; Reitze, D. H.; Riesen, R.; Riles, K.; Rivera, B.; Robertson, N. A.; Robinson, C.; Robinson, E. L.; Roddy, S.; Rodriguez, A.; Rogan, A. M.; Rollins, J.; Romano, J. D.; Romie, J.; Route, R.; Rowan, S.; Rüdiger, A.; Ruet, L.; Russell, P.; Ryan, K.; Sakata, S.; Samidi, M.; Sancho de la Jordana, L.; Sandberg, V.; Sannibale, V.; Saraf, S.; Sarin, P.; Sathyaprakash, B. S.; Sato, S.; Saulson, P. R.; Savage, R.; Savov, P.; Schediwy, S. W.; Schilling, R.; Schnabel, R.; Schofield, R.; Schutz, B. F.; Schwinberg, P.; Scott, S. M.; Searle, A. C.; Sears, B.; Seifert, F.; Sellers, D.; Sengupta, A. S.; Shawhan, P.; Shoemaker, D. H.; Sibley, A.; Siemens, X.; Sigg, D.; Sinha, S.; Sintes, A. M.; Slagmolen, B. J. J.; Slutsky, J.; Smith, J. R.; Smith, M. R.; Smith, N. D.; Somiya, K.; Sorazu, B.; Stein, L. C.; Stochino, A.; Stone, R.; Strain, K. A.; Strom, D. M.; Stuver, A.; Summerscales, T. Z.; Sun, K.-X.; Sung, M.; Sutton, P. J.; Takahashi, H.; Tanner, D. B.; Taylor, R.; Taylor, R.; Thacker, J.; Thorne, K. A.; Thorne, K. S.; Thüring, A.; Tokmakov, K. V.; Torres, C.; Torrie, C.; Traylor, G.; Trias, M.; Tyler, W.; Ugolini, D.; Ulmen, J.; Urbanek, K.; Vahlbruch, H.; Van Den Broeck, C.; van der Sluys, M.; Vass, S.; Vaulin, R.; Vecchio, A.; Veitch, J.; Veitch, P.; Villar, A.; Vorvick, C.; Vyachanin, S. P.; Waldman, S. J.; Wallace, L.; Ward, H.; Ward, R.; Weinert, M.; Weinstein, A.; Weiss, R.; Wen, S.; Wette, K.; Whelan, J. T.; Whitcomb, S. E.; Whiting, B. F.; Wilkinson, C.; Willems, P. A.; Williams, H. R.; Williams, L.; Willke, B.; Wilmut, I.; Winkler, W.; Wipf, C. C.; Wiseman, A. G.; Woan, G.; Wooley, R.; Worden, J.; Wu, W.; Yakushin, I.; Yamamoto, H.; Yan, Z.; Yoshida, S.; Zanolin, M.; Zhang, J.; Zhang, L.; Zhao, C.; Zotov, N.; Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Santostasi, G.

    2009-11-01

    A processing error in the signal template used in this search led to upper limits about 30% lower than we now know is warranted by the early S5 data. We have re-analyzed that data and find new upper limits on the strain parameter h0 of 4.9 × 10^-25/3.9 × 10^-25 for uniform/restricted prior assumptions concerning the Crab inclination and polarization angles. These results have now been superseded by upper limits of 2.6 × 10^-25/2.0 × 10^-25 based on the full S5 data and presented in Abbott et al. (2009). The multitemplate search was not affected by the error.

  2. The antihypertensive effect of calorie restriction in obese adolescents: dissociation of effects on erythrocyte countertransport and cotransport.

    PubMed

    Weder, A B; Torretti, B A; Katch, V L; Rocchini, A P

    1984-10-01

    Measures of maximal rates of lithium-sodium countertransport and frusemide-sensitive sodium and potassium cotransport have been proposed as biochemical markers for human essential hypertension. The stability of these functions over time within the same individuals has led to the suggestion that maximal transport capacities are genetically determined. The present study confirms the reproducibility of functional assays of countertransport and cotransport in human erythrocytes after overnight storage and over a six-month period in normal volunteers and provides estimates of the magnitude of technical error for each assay. A long-term dietary intervention study in a group of obese adolescents demonstrated marked increases in erythrocyte sodium levels and maximal frusemide-sensitive sodium and potassium fluxes but no changes in cell potassium or water and no effect on lithium-sodium countertransport. A correlation between the decrease in percentage of body fat and the increase in cell sodium content suggests a link between the metabolic effects of dieting and control of erythrocyte cation handling. Although the mechanism linking dietary calorie restriction and changes in erythrocyte cation metabolism is unknown, evaluation of body weight, and especially recent weight loss, is important in studies of erythrocyte transport. Conclusions regarding genetic contributions to the activities of lithium-sodium countertransport and sodium-potassium cotransport systems will be strengthened by clarification of environmental regulators.

  3. Illumination-parameter adjustable and illumination-distribution visible LED helmet for low-level light therapy on brain injury

    NASA Astrophysics Data System (ADS)

    Wang, Pengbo; Gao, Yuan; Chen, Xiao; Li, Ting

    2016-03-01

    Low-level light therapy (LLLT) has been clinically applied. Recently, more and more cases have been reported with positive therapeutic effects from transcranial light-emitting diode (LED) illumination. Here, we developed an LLLT helmet for treating brain injuries based on LED arrays. We designed the LED arrays in a circular shape and assembled them in a multilayered 3D-printed helmet with a water-cooling module. The LED arrays can be adjusted to touch the head of the subject. A control circuit was developed to drive and control the illumination of the LLLT helmet. The software portion provides control of the on/off state of each LED array, the setup of illumination parameters, and a 3D view of the LLLT light dose distribution in the human subject according to the illumination setup. This LLLT light dose distribution was computed by a Monte Carlo model for voxelized media using the Visible Chinese Human head dataset and displayed in 3D against the background of the head's anatomical structure. The performance of the whole system was fully tested. One stroke patient was recruited for a preliminary LLLT experiment, and subsequent neuropsychological testing showed obvious improvement in memory and executive functioning. This clinical case suggests the potential of this illumination-parameter-adjustable and illumination-distribution-visible LED helmet as a reliable, noninvasive, and effective tool for treating brain injuries.
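
    A heavily simplified sketch of the kind of voxelized Monte Carlo dose calculation mentioned above, assuming a homogeneous medium, isotropic scattering, no refractive-index mismatch, and made-up optical coefficients; it is illustrative only and is not the voxel Monte Carlo code or Visible Chinese Human head model used in this work:

        # Heavily simplified Monte Carlo sketch of photon dose deposition in a
        # homogeneous voxel grid (isotropic scattering, made-up coefficients).
        import numpy as np

        rng = np.random.default_rng(0)
        mu_a, mu_s = 0.02, 1.0            # assumed absorption / reduced scattering (1/mm)
        mu_t = mu_a + mu_s
        grid = np.zeros((40, 40, 40))     # 1 mm voxels; accumulates absorbed weight
        voxel_mm = 1.0

        def launch_photon():
            pos = np.array([20.0, 20.0, 0.0])        # enter at the surface, heading +z
            direction = np.array([0.0, 0.0, 1.0])
            weight = 1.0
            while weight > 1e-4:                     # crude low-weight cutoff
                step = -np.log(1.0 - rng.random()) / mu_t   # free path length (mm)
                pos = pos + step * direction
                idx = np.floor(pos / voxel_mm).astype(int)
                if np.any(idx < 0) or np.any(idx >= np.array(grid.shape)):
                    break                            # photon left the volume
                absorbed = weight * mu_a / mu_t      # deposit the absorbed fraction
                grid[tuple(idx)] += absorbed
                weight -= absorbed
                direction = rng.normal(size=3)       # isotropic scattering: new direction
                direction /= np.linalg.norm(direction)

        for _ in range(2000):
            launch_photon()
        print("peak absorbed weight per voxel:", grid.max())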

  4. Practical Application of PRA as an Integrated Design Tool for Space Systems

    NASA Technical Reports Server (NTRS)

    Kalia, Prince; Shi, Ying; Pair, Robin; Quaney, Virginia; Uhlenbrock, John

    2013-01-01

    This paper presents the application of the first comprehensive Probabilistic Risk Assessment (PRA) during the design phase of a joint NASA/NOAA weather satellite program, Geostationary Operational Environmental Satellite Series R (GOES-R). GOES-R is the next-generation weather satellite, intended primarily to help understand the weather and save human lives. PRA has been used at NASA for Human Space Flight for many years. PRA was initially adopted and implemented in the operational phase of manned space flight programs and more recently for the next generation human space systems. Since its first use at NASA, PRA has become recognized throughout the Agency as a method of assessing complex mission risks as part of an overall approach to assuring safety and mission success throughout project lifecycles. PRA is now included as a requirement during the design phase of both NASA's next-generation manned space vehicles and high-priority robotic missions. The influence of PRA on GOES-R design and operation concepts is discussed in detail. The GOES-R PRA is unique at NASA for its early implementation. It also represents a pioneering effort to integrate risks from both Spacecraft (SC) and Ground Segment (GS) to fully assess the probability of achieving mission objectives. PRA analysts were actively involved in system engineering and design engineering to ensure that a comprehensive set of technical risks were correctly identified and properly understood from a design and operations perspective. The analysis included an assessment of SC hardware and software, SC fault management system, GS hardware and software, common cause failures, human error, natural hazards, solar weather and infrastructure (such as network and telecommunications failures, fire). PRA findings directly resulted in design changes to reduce SC risk from micro-meteoroids. PRA results also led to design changes in several SC subsystems, e.g. propulsion, guidance, navigation and control (GNC), communications, mechanisms, and command and data handling (C&DH). The fault tree approach assisted in the development of the fault management system design. Human error analysis, which examined human response to failure, indicated areas where automation could reduce the overall probability of gaps in operation by half. In addition, the PRA brought to light many potential root causes of system disruptions, including earthquakes, inclement weather, solar storms, blackouts and other extreme conditions not considered in the typical reliability and availability analyses. Ultimately the PRA served to identify potential failures that, when mitigated, resulted in a more robust design, as well as to influence the program's concept of operations. The early and active integration of PRA with system and design engineering provided a well-managed approach for risk assessment that increased reliability and availability, optimized lifecycle costs, and unified the SC and GS developments.

  5. Metacognitive unawareness of the errorful generation benefit and its effects on self-regulated learning.

    PubMed

    Yang, Chunliang; Potts, Rosalind; Shanks, David R

    2017-07-01

    Generating errors followed by corrective feedback enhances retention more effectively than does reading-the benefit of errorful generation-but people tend to be unaware of this benefit. The current research explored this metacognitive unawareness, its effect on self-regulated learning, and how to alleviate or reverse it. People's beliefs about the relative learning efficacy of generating errors followed by corrective feedback compared to reading, and the effects of generation fluency, are also explored. In Experiments 1 and 2, lower judgments of learning (JOLs) were consistently given to incorrectly generated word pairs than to studied (read) pairs and led participants to distribute more study resources to incorrectly generated pairs, even though superior recall of these pairs was exhibited in the final test. In Experiment 3, a survey revealed that people believe that generating errors followed by corrective feedback is inferior to reading. Experiment 4 was designed to alter participants' metacognition by informing them of the errorful generation benefit prior to study. Although metacognitive misalignment was partly countered, participants still tended to be unaware of this benefit when making item-by-item JOLs. In Experiment 5, in a delayed JOL condition, higher JOLs were given to incorrectly generated pairs and read pairs were more likely to be selected for restudy. The current research reveals that people tend to underestimate the learning efficiency of generating errors followed by corrective feedback relative to reading when making immediate item-by-item JOLs. Informing people of the errorful generation benefit prior to study and asking them to make delayed JOLs are effective ways to alleviate this metacognitive miscalibration. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. A prospective audit of a nurse independent prescribing within critical care.

    PubMed

    Carberry, Martin; Connelly, Sarah; Murphy, Jennifer

    2013-05-01

    To determine the prescribing activity of different staff groups within intensive care unit (ICU) and combined high dependency unit (HDU), namely trainee and consultant medical staff and advanced nurse practitioners in critical care (ANPCC); to determine the number and type of prescription errors; to compare error rates between prescribing groups and to raise awareness of prescribing activity within critical care. The introduction of government legislation has led to the development of non-medical prescribing roles in acute care. This has facilitated an opportunity for the ANPCC working in critical care to develop a prescribing role. The audit was performed over 7 days (Monday-Sunday), on rolling days over a 7-week period in September and October 2011 in three ICUs. All drug entries made on the ICU prescription by the three groups, trainee medical staff, ANPCCs and consultant anaesthetists, were audited once for errors. Data were collected by reviewing all drug entries for errors, namely patient data, drug dose, concentration, rate and frequency, legibility and prescriber signature. A paper data collection tool was used initially; data was later entered into a Microsoft Access database. A total of 1418 drug entries were audited from 77 patient prescription Cardexes. Error rates were reported as 40 errors in 1418 prescriptions (2·8%): ANPCC errors, n = 2 in 388 prescriptions (0·6%); trainee medical staff errors, n = 33 in 984 (3·4%); consultant errors, n = 5 in 73 (6·8%). The error rates were significantly different for different prescribing groups (p < 0·01). This audit shows that prescribing error rates were low (2·8%). Having the lowest error rate, the nurse practitioners are at least as effective as other prescribing groups within this audit, in terms of errors only, in prescribing diligence. National data is required in order to benchmark independent nurse prescribing practice in critical care. These findings could be used to inform research and role development within critical care. © 2012 The Authors. Nursing in Critical Care © 2012 British Association of Critical Care Nurses.

  7. Collimation testing using slit Fresnel diffraction

    NASA Astrophysics Data System (ADS)

    Luo, Xiaohe; Hui, Mei; Wang, Shanshan; Hou, Yinlong; Zhou, Siyu; Zhu, Qiudong

    2018-03-01

    A simple collimation testing method based on slit Fresnel diffraction is proposed. The method requires only a CMOS sensor and a slit, with no requirement on dimensional accuracy. The light beam to be tested diffracts at the slit and forms a Fresnel diffraction pattern on the CMOS sensor. Analysis shows that the defocus amount and the distance between the primary and secondary peaks of the diffraction pattern satisfy an analytical relationship, from which the defocus amount can be deduced. The method is applied to both a coherent beam from a laser and a partially coherent beam from a light-emitting diode (LED) with a spectral width of about 50 nm. Simulations show that the broad LED spectrum acts as a smoothing filter and thereby provides higher accuracy. Experiments show that the LED with a spectral width of about 50 nm has a lower limiting error than the laser and can reach 58.1601 μm with a focal length of 200 mm and a slit width of 15 mm.
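
    As an illustration of the underlying physics only, not of the paper's specific peak-separation expression, the sketch below computes the Fresnel diffraction pattern of a slit from the standard Fresnel integrals and locates the primary and secondary peaks; the wavelength, slit width, and propagation distance are arbitrary example values:

    ```python
    # Illustrative slit Fresnel diffraction pattern (example parameters, not the
    # paper's setup): intensity on the sensor and the positions of the two
    # strongest peaks, whose separation carries the defocus information.
    import numpy as np
    from scipy.special import fresnel
    from scipy.signal import find_peaks

    wavelength = 632.8e-9   # m, example He-Ne line
    slit_width = 200e-6     # m, example slit
    z = 0.2                 # m, slit-to-sensor distance (example)

    x = np.linspace(-2e-3, 2e-3, 4001)       # observation coordinate on the sensor
    scale = np.sqrt(2.0 / (wavelength * z))
    t1 = scale * (-slit_width / 2 - x)
    t2 = scale * (slit_width / 2 - x)

    s1, c1 = fresnel(t1)                     # scipy returns (S, C)
    s2, c2 = fresnel(t2)
    amplitude = (c2 - c1) + 1j * (s2 - s1)
    intensity = np.abs(amplitude) ** 2
    intensity /= intensity.max()

    # Keep the two highest local maxima: the primary and a secondary peak.
    peaks, _ = find_peaks(intensity)
    top = peaks[np.argsort(intensity[peaks])[::-1][:2]]
    print("primary and secondary peak positions (m):", np.sort(x[top]))
    ```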

  8. 1Mbps NLOS solar-blind ultraviolet communication system based on UV-LED array

    NASA Astrophysics Data System (ADS)

    Sun, Zhaotian; Zhang, Lijun; Li, Ping'an; Qin, Yu; Bai, Tingzhu

    2018-01-01

    We propose and demonstrate a high-data-rate ultraviolet communication system based on a 266 nm UV LED array with 50 mW luminous power. The emitting source is driven by a three-output constant-current control circuit with a driving speed of up to 2 Mbps. At the receiving side, a two-stage differential preamplifier performs the current-to-voltage conversion needed to amplify the high-speed signal; its gain is up to 140 dB and its bandwidth is 1.9 MHz. An experiment is conducted to test the performance of the UV communication system, and the effects of elevation angle and transmission distance are analyzed. The system achieves a data rate of up to 921.6 kbps with a bit error rate below 10⁻⁷ at 150 m, beating the best transmission-rate record previously reported for a UV-LED communication system.
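
    For orientation, a hedged sketch of how a bit error rate relates to signal-to-noise ratio for a simple on-off-keyed link with a mid-level threshold in additive Gaussian noise; this is a generic textbook model, not the authors' NLOS channel or receiver, and the SNR convention used is an assumption:

    ```python
    # Illustrative closed-form BER for OOK with a mid-level threshold in additive
    # Gaussian noise: BER = Q(A / (2 * sigma)), where A is the "on" amplitude and
    # sigma the noise standard deviation. Not the authors' NLOS channel model.
    import math

    def q_function(x: float) -> float:
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def ook_ber(peak_snr_db: float) -> float:
        # peak_snr_db is taken here as 20*log10(A / sigma); an assumption, since
        # SNR conventions for intensity-modulated links vary.
        a_over_sigma = 10 ** (peak_snr_db / 20)
        return q_function(a_over_sigma / 2)

    for snr_db in (12, 16, 20, 24):
        print(f"peak SNR {snr_db} dB -> BER ~ {ook_ber(snr_db):.1e}")

    # A BER below 1e-7 (as reported at 150 m) requires roughly A/(2*sigma) > 5.2,
    # i.e. a peak SNR above ~20 dB under this simple model.
    ```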

  9. Target Uncertainty Mediates Sensorimotor Error Correction

    PubMed Central

    Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M.

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects’ scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one’s response. By suggesting that subjects’ decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated. PMID:28129323

  10. Target Uncertainty Mediates Sensorimotor Error Correction.

    PubMed

    Acerbi, Luigi; Vijayakumar, Sethu; Wolpert, Daniel M

    2017-01-01

    Human movements are prone to errors that arise from inaccuracies in both our perceptual processing and execution of motor commands. We can reduce such errors by both improving our estimates of the state of the world and through online error correction of the ongoing action. Two prominent frameworks that explain how humans solve these problems are Bayesian estimation and stochastic optimal feedback control. Here we examine the interaction between estimation and control by asking if uncertainty in estimates affects how subjects correct for errors that may arise during the movement. Unbeknownst to participants, we randomly shifted the visual feedback of their finger position as they reached to indicate the center of mass of an object. Even though participants were given ample time to compensate for this perturbation, they only fully corrected for the induced error on trials with low uncertainty about center of mass, with correction only partial in trials involving more uncertainty. The analysis of subjects' scores revealed that participants corrected for errors just enough to avoid significant decrease in their overall scores, in agreement with the minimal intervention principle of optimal feedback control. We explain this behavior with a term in the loss function that accounts for the additional effort of adjusting one's response. By suggesting that subjects' decision uncertainty, as reflected in their posterior distribution, is a major factor in determining how their sensorimotor system responds to error, our findings support theoretical models in which the decision making and control processes are fully integrated.
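
    To make the partial-correction idea concrete, here is a toy sketch, not the authors' model, in which the applied correction minimizes a quadratic loss that weighs the residual error by the target's precision and charges a fixed cost for effort; the numbers are illustrative:

    ```python
    # Toy illustration of the minimal-intervention idea: the correction c applied
    # to an observed error e minimizes precision*(e - c)**2 + effort_cost*c**2,
    # where precision = 1/sigma^2 reflects target (un)certainty. Illustrative only.
    def optimal_correction(error: float, target_sigma: float, effort_cost: float = 1.0) -> float:
        precision = 1.0 / target_sigma ** 2
        # Closed-form minimizer of the quadratic loss above.
        return error * precision / (precision + effort_cost)

    observed_error = 2.0  # cm, example visual perturbation of the finger position
    for sigma in (0.5, 1.0, 2.0):  # low, medium, high target uncertainty (illustrative)
        c = optimal_correction(observed_error, sigma)
        print(f"target sigma = {sigma}: correct {100 * c / observed_error:.0f}% of the error")
    ```

    Under this toy loss the correction fraction falls as target uncertainty grows, mirroring the partial corrections reported for high-uncertainty trials.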

  11. Analysis of methods commonly used in biomedicine for treatment versus control comparison of very small samples.

    PubMed

    Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M

    2018-04-01

    A rough estimate indicated that the use of samples no larger than ten is not uncommon in biomedical research and that many such studies are limited to strong effects because their sample sizes are smaller than six. For data collected from biomedical experiments it is also often unknown whether the mathematical assumptions built into the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, the distribution of the studied parameter in the population, and sample size. The Type I and Type II errors, their average, and the maximal errors were considered. A sample size of 9 combined with the t-test at p = 5% kept the error below 5% even for weak effects. For sample sizes of 6-8 the same method enabled detection of weak effects with errors smaller than 20%. If the sample size was 3-5, weak effects could not be detected with acceptable error; however, the smallest maximal error in the most general case that includes weak effects is obtained with the standard-error-of-the-mean method. Increasing the sample size from 5 to 9 made detection of weak effects seven times more accurate. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes, and the choice of comparison method, should be justified by the objective of the experiment. Copyright © 2018 Elsevier B.V. All rights reserved.
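
    A minimal Monte Carlo sketch of the kind of error-rate estimation described above, assuming normally distributed data, a two-sample t-test, and an illustrative effect size; it is not the authors' simulation protocol:

    ```python
    # Estimate Type I and Type II error rates of a two-sample t-test at small
    # sample sizes by simulation. The "weak" effect size of 1 SD is an assumption
    # made for illustration, not a value taken from the study.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)

    def error_rates(n, effect, alpha=0.05, trials=20_000):
        type1 = type2 = 0
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n)
            same = rng.normal(0.0, 1.0, n)          # no true effect
            treated = rng.normal(effect, 1.0, n)    # true effect of given size
            type1 += ttest_ind(control, same).pvalue < alpha        # false positive
            type2 += ttest_ind(control, treated).pvalue >= alpha    # missed effect
        return type1 / trials, type2 / trials

    for n in (3, 6, 9):
        t1, t2 = error_rates(n, effect=1.0)
        print(f"n = {n}: Type I ~ {t1:.3f}, Type II ~ {t2:.3f}")
    ```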

  12. Apoplastic water fraction and rehydration techniques introduce significant errors in measurements of relative water content and osmotic potential in plant leaves.

    PubMed

    Arndt, Stefan K; Irawan, Andi; Sanders, Gregor J

    2015-12-01

    Relative water content (RWC) and osmotic potential (π) of plant leaves are important plant traits that can be used to assess drought tolerance or adaptation. We estimated the magnitude of the errors introduced into osmometry methods by apoplastic water dilution of π, and of the errors that arise during rehydration of leaves for RWC and π, in 14 plant species spanning trees, grasses and herbs. Our data indicate that the rehydration technique and the length of rehydration can introduce significant errors in both RWC and π. Leaves of all species were fully turgid after 1-3 h of rehydration, and increasing the rehydration time resulted in a significant underprediction of RWC. Standing rehydration via the petiole introduced the smallest errors, while rehydration via floating disks and via submerging whole leaves led to a greater underprediction of RWC. The same effect was observed for π. The π values obtained after standing rehydration could be corrected by applying an osmometrically derived apoplastic dilution factor, but not by using the apoplastic water fraction (AWF) from pressure-volume (PV) curves. The apoplastic dilution error was between 5 and 18%, while the two other rehydration methods introduced much greater errors. We recommend the standing rehydration method because (1) the correct rehydration time can be determined by measuring water potential, (2) overhydration effects were smallest, and (3) π can be accurately corrected using osmometric methods to estimate apoplastic water dilution. © 2015 Scandinavian Plant Physiology Society.
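
    For orientation only, a sketch of the conventional apoplastic dilution correction, in which the bulk-leaf osmotic potential is scaled by the ratio of total to symplastic water; the study itself applies an osmometrically estimated dilution factor, so the apoplastic water fractions used here are purely illustrative:

    ```python
    # Conventional apoplastic dilution correction (illustrative numbers): the
    # measured bulk-leaf osmotic potential is rescaled to the symplastic value,
    # assuming the apoplastic water contains negligible solutes.
    def correct_osmotic_potential(pi_measured_mpa: float, apoplastic_fraction: float) -> float:
        return pi_measured_mpa / (1.0 - apoplastic_fraction)

    pi_measured = -1.50              # MPa, example bulk-leaf measurement
    for awf in (0.05, 0.10, 0.18):   # spans the 5-18% dilution error range quoted above
        pi_sym = correct_osmotic_potential(pi_measured, awf)
        error_pct = 100 * (pi_sym - pi_measured) / pi_sym
        print(f"AWF = {awf:.2f}: corrected pi = {pi_sym:.2f} MPa ({error_pct:.0f}% dilution error)")
    ```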

  13. Electrocardiograms with pacemakers: accuracy of computer reading.

    PubMed

    Guglin, Maya E; Datwani, Neeta

    2007-04-01

    We analyzed the accuracy with which a computer algorithm reads electrocardiograms (ECGs) with electronic pacemakers (PMs). Electrocardiograms were screened for the presence of electronic pacing spikes, and computer-derived interpretations were compared with cardiologists' readings. The computer-derived interpretations required revision by cardiologists in 61.3% of cases. In 18.4% of cases, the ECG reading algorithm failed to recognize the presence of a PM. Misinterpretation of paced beats as intrinsic beats led to multiple secondary errors, including spurious diagnoses of myocardial infarction in various localizations. The most common error in computer reading was failure to identify the underlying rhythm. This error caused frequent misidentification of the PM type, especially when normal sinus rhythm was not recognized in a tracing with a DDD PM tracking the atrial activity. The increasing number of pacing devices, and the resulting number of ECGs with pacing spikes, mandates refinement of ECG reading algorithms. Improvement is especially needed in the recognition of the underlying rhythm, pacing spikes, and mode of pacing.

  14. Proof of Heisenberg's error-disturbance relation.

    PubMed

    Busch, Paul; Lahti, Pekka; Werner, Reinhard F

    2013-10-18

    While the slogan "no measurement without disturbance" has established itself under the name of the Heisenberg effect in the consciousness of the scientifically interested public, a precise statement of this fundamental feature of the quantum world has remained elusive, and serious attempts at rigorous formulations of it as a consequence of quantum theory have led to seemingly conflicting preliminary results. Here we show that despite recent claims to the contrary [L. Rozema et al., Phys. Rev. Lett. 109, 100404 (2012)], Heisenberg-type inequalities can be proven that describe a tradeoff between the precision of a position measurement and the necessary resulting disturbance of momentum (and vice versa). More generally, these inequalities are instances of an uncertainty relation for the imprecisions of any joint measurement of position and momentum. Measures of error and disturbance are here defined as figures of merit characteristic of measuring devices. As such they are state independent, each giving worst-case estimates across all states, in contrast to previous work that is concerned with the relationship between error and disturbance in an individual state.
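
    For context, the trade-off established in this work can be written schematically as below, where ε(Q) and η(P) denote the paper's state-independent, worst-case measures of position error and momentum disturbance and ħ is the reduced Planck constant:

    ```latex
    % Schematic statement of the error-disturbance trade-off discussed above,
    % with \varepsilon(Q) the worst-case position-measurement error and
    % \eta(P) the worst-case momentum disturbance of the measuring device.
    \varepsilon(Q)\,\eta(P) \;\geq\; \frac{\hbar}{2}
    ```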

  15. In vitro results of flexible light-emitting antimicrobial bandage designed for prevention of surgical site infections

    NASA Astrophysics Data System (ADS)

    Greenberg, Mitchell; Sharan, Riti; Galbadage, Thushara; Sule, Preeti; Smith, Robert; Lovelady, April; Cirillo, Jeffrey D.; Glowczwski, Alan; Maitland, Kristen C.

    2018-02-01

    Surgical site infections (SSIs) are a leading cause of morbidity and mortality and a significant expense to the healthcare system and hospitals. The majority of these infections are preventable; however, increasing bacterial resistance, biofilm persistence, and human error contribute to the occurrence of these healthcare-associated infections. We present a flexible antimicrobial blue-light-emitting bandage designed for use on postoperative incisions and wounds. The photonic device is designed to inactivate bacteria present on the skin and prevent bacterial colonization of the site, thus reducing the occurrence of SSIs. This antimicrobial light-emitting bandage exploits blue light's demonstrated ability to inactivate a wide range of clinical pathogens regardless of their antibiotic resistance, to inactivate bacteria without harming mammalian cells, to improve wound healing, and to inactivate bacteria in biofilms. The bandage consists of a thin 2"x2" silicone sheet with an array of 77 LEDs embedded in multiple layers of the material for thermal management. The 405 nm center-wavelength LED array is designed as a wearable device that integrates with standard hospital infection prevention protocols. The device was characterized at an irradiance of 44.5 mW/cm². Methicillin-resistant Staphylococcus aureus seeded in a petri dish was used to evaluate bacterial inactivation in vitro. Starting from a concentration of 2.16 x 10⁷ colony forming units (CFU)/mL, 45% of the bacteria were inactivated within 15 minutes, 65% by 30 minutes, and 99% by 60 minutes; a 7-log reduction and complete sterilization were achieved within 120 minutes.
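
    The kill kinetics quoted above translate into log reductions as follows; a small arithmetic check using the reported starting concentration and survival fractions:

    ```python
    # Convert the reported inactivation fractions and the 7-log endpoint into
    # log10 reductions and surviving CFU/mL, starting from 2.16e7 CFU/mL.
    import math

    initial_cfu_per_ml = 2.16e7
    timepoints = {15: 0.45, 30: 0.65, 60: 0.99}  # minutes -> fraction inactivated

    for minutes, killed in timepoints.items():
        surviving = initial_cfu_per_ml * (1.0 - killed)
        log_reduction = math.log10(initial_cfu_per_ml / surviving)
        print(f"{minutes:3d} min: ~{surviving:.2e} CFU/mL remaining "
              f"({log_reduction:.1f}-log reduction)")

    # The reported 7-log reduction at 120 min leaves on the order of:
    print(f"120 min: ~{initial_cfu_per_ml / 10**7:.1f} CFU/mL remaining (7-log reduction)")
    ```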

  16. Mission Design Considerations for Mars Cargo of the Human Spaceflight Architecture Team's Evolvable Mars Campaign

    NASA Technical Reports Server (NTRS)

    Sjauw, Waldy K.; McGuire, Melissa L.; Freeh, Joshua E.

    2016-01-01

    Recent NASA interest in human missions to Mars has led to an Evolvable Mars Campaign by the agency's Human Spaceflight Architecture Team. To deliver the crew return propulsion stages and Mars surface landers, solar electric propulsion (SEP) based systems are employed; their high specific impulse enables missions that require less propellant, although with longer transfer times. The Earth departure trajectories start from an SLS launch vehicle delivery orbit and are spiral shaped because of the low SEP thrust. Previous studies have led to interest in assessing the split of trip time between the Earth departure and interplanetary legs of the mission for a representative SEP cargo vehicle.

  17. Active tracking system for visible light communication using a GaN-based micro-LED and NRZ-OOK.

    PubMed

    Lu, Zhijian; Tian, Pengfei; Chen, Hong; Baranowski, Izak; Fu, Houqiang; Huang, Xuanqi; Montes, Jossue; Fan, Youyou; Wang, Hongyi; Liu, Xiaoyan; Liu, Ran; Zhao, Yuji

    2017-07-24

    Visible light communication (VLC) holds the promise of a high-speed wireless network for indoor applications and competes with 5G radio frequency (RF) systems. Although the breakthrough of gallium nitride (GaN) based micro-light-emitting diodes (micro-LEDs) has raised the -3 dB modulation bandwidth dramatically, from tens of MHz to hundreds of MHz, the light collected by a fast photoreceiver drops sharply, and this collected power determines the signal-to-noise ratio (SNR) of the VLC link. A practical high-data-rate VLC link enabled by a GaN-based micro-LED therefore requires focusing optics and a tracking system. In this paper, we demonstrate an active on-chip tracking system for VLC using a GaN-based micro-LED and non-return-to-zero on-off keying (NRZ-OOK). Using this technique, the field of view (FOV) was enlarged to 120° and data rates of up to 600 Mbps at a bit error rate (BER) of 2.1×10⁻⁴ were achieved without manual focusing. This work establishes a VLC physical link whose communication quality is improved by orders of magnitude, making it well suited to practical communication applications.

  18. LED light design method for high contrast and uniform illumination imaging in machine vision.

    PubMed

    Wu, Xiaojun; Gao, Guangming

    2018-03-01

    In machine vision, illumination is critical in determining the complexity of the inspection algorithms. Proper lighting yields clear, sharp images with high contrast and low noise between the object of interest and the background, which makes the target easier to locate, measure, or inspect. In contrast to the conventional trial-and-error selection of off-the-shelf LED lights in machine vision, an optimization algorithm for LED light design is proposed in this paper. It is composed of contrast optimization modeling and a uniform illumination technique for non-normal incidence (UINI). The contrast optimization model is built on surface reflection characteristics such as roughness, refractive index, and light direction, and maximizes the contrast between the features of interest and the background. The UINI step preserves the uniformity of the lighting optimized by the contrast model. Simulation and experimental results demonstrate that the optimization algorithm is effective at producing images with high contrast and uniformity, which is instructive for the design of LED illumination systems in machine vision.
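
    As a rough illustration of the kind of optimization described, and not the paper's reflection model, the sketch below scans candidate LED incidence angles and keeps the angle that maximizes Michelson contrast between a feature and its background, each described by a toy diffuse-plus-specular reflectance with made-up parameters:

    ```python
    # Toy contrast-optimization sketch (illustrative reflectance parameters only):
    # scan LED incidence angles and keep the angle that maximizes Michelson
    # contrast between the feature and the background.
    import numpy as np

    def brightness(theta_deg, diffuse_albedo, specular_strength, lobe_width_rad):
        """Very simple diffuse + specular response, camera looking straight down."""
        theta = np.radians(theta_deg)
        diffuse = diffuse_albedo * np.cos(theta)
        specular = specular_strength * np.exp(-(theta / lobe_width_rad) ** 2)
        return diffuse + specular

    angles = np.linspace(5, 85, 81)  # candidate incidence angles (degrees)
    feature = brightness(angles, diffuse_albedo=0.2, specular_strength=0.9,
                         lobe_width_rad=np.radians(8))      # shiny, dark feature (toy)
    background = brightness(angles, diffuse_albedo=0.6, specular_strength=0.1,
                            lobe_width_rad=np.radians(25))  # matte, bright background (toy)

    contrast = np.abs(feature - background) / (feature + background)  # Michelson contrast
    best = angles[np.argmax(contrast)]
    print(f"best incidence angle ~ {best:.0f} deg, contrast = {contrast.max():.2f}")
    ```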

  19. An inter-lighting interference cancellation scheme for MISO-VLC systems

    NASA Astrophysics Data System (ADS)

    Kim, Kyuntak; Lee, Kyujin; Lee, Kyesan

    2017-08-01

    In this paper, we propose an inter-lighting interference cancellation (ILIC) scheme to reduce the interference between adjacent light-emitting diodes (LEDs) and enhance the transmission capacity of multiple-input-single-output (MISO) visible light communication (VLC) systems. In indoor environments, multiple LEDs are normally used as lighting sources, which allows the design of MISO-VLC systems. To enhance the transmission capacity, different data should be transmitted simultaneously from each LED; however, this can lead to interference between adjacent LEDs. In that case, signals with relatively low received power suffer large interference because wireless optical systems generally use intensity modulation and direct detection. Thus, only the signal with the highest received power can be detected directly, while the weaker signals cannot. To solve this problem, we propose the ILIC scheme for MISO-VLC systems. The proposed scheme first detects the signal with the highest received power; an interference component generator then treats this signal as the interference reference. The signal with relatively low received power can then be detected by cancelling the interference signal from the total received signal. The proposed scheme therefore improves the overall average bit error rate and throughput of a MISO-VLC system.
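
    A minimal sketch of the cancellation idea described above, for two OOK-modulated LEDs of unequal received power at a single photodiode; the amplitudes, noise level, and thresholds are illustrative assumptions rather than the paper's parameters:

    ```python
    # Illustrative two-LED interference cancellation at a single photodetector:
    # detect the strong LED's bits first, regenerate its contribution, subtract
    # it, then detect the weak LED's bits from the residual.
    import numpy as np

    rng = np.random.default_rng(2)
    n_bits = 10_000
    p_strong, p_weak, noise_sigma = 1.0, 0.4, 0.05  # example received amplitudes and noise

    bits_strong = rng.integers(0, 2, n_bits)
    bits_weak = rng.integers(0, 2, n_bits)
    received = p_strong * bits_strong + p_weak * bits_weak + rng.normal(0, noise_sigma, n_bits)

    # Stage 1: decide the dominant signal, treating the weak one as interference.
    est_strong = received > (p_strong / 2 + p_weak / 2)
    # Stage 2: regenerate and cancel the dominant contribution, then decide the weak signal.
    residual = received - p_strong * est_strong
    est_weak = residual > p_weak / 2

    print("strong-LED BER:", np.mean(est_strong != bits_strong))
    print("weak-LED BER:  ", np.mean(est_weak != bits_weak))
    ```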

  20. A statistical study of radio-source structure effects on astrometric very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.

    1989-01-01

    Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of calibration and modeling methods. These reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. Here, the results of calculations on 26 radio sources are reported, in an effort to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 ps range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approximately 5 × 10⁻¹⁵ s/s (or approximately 0.002 mm/s) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approximately 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider-bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although considerable effort will likely have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.
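
    The quoted position uncertainties follow from simple geometry: an extra delay τ on a baseline of length B shifts the apparent source direction by roughly θ ≈ cτ/B. A quick check with the numbers above, assuming a representative 8000 km intercontinental baseline:

    ```python
    # Back-of-the-envelope check: delay scatter tau maps to an angular error of
    # roughly theta = c * tau / B. The 8000 km baseline is an assumed
    # representative DSN intercontinental length, not a figure from the report.
    C = 299_792_458.0          # speed of light, m/s
    baseline_m = 8.0e6         # assumed baseline length, m

    for tau_ps in (50, 100):
        theta_rad = C * tau_ps * 1e-12 / baseline_m
        print(f"tau = {tau_ps} ps -> theta ~ {theta_rad * 1e9:.1f} nrad")
    ```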
